Tuesday, 30 August 2016

How Web Scraping can Help you Detect Weak spots in your Business

Business intelligence is not a new term. Businesses have always employed experts to analyse their progress, the market and industry trends to keep their growth graph going up. Now that we have big data and web scraping, the tool to gather it, business intelligence has become even more fruitful. In fact, with competition fierce in every industry, business intelligence has become necessary for survival. This is why most enterprises depend on web scraping solutions to gather the data relevant to their businesses. This data is insightful and dependable enough to base critical business decisions on. Business intelligence from web scraping is a game changer for companies, as it can supply relevant and actionable data with minimal effort.

Most businesses have weak spots that are overlooked or hidden from plain sight. These weak spots, if left unnoticed, can gradually lead to the downfall of your company. Here is how you can use data acquired through web scraping to detect the weak spots in your business and strengthen them.

Competitor analysis

Often, you can find flaws in your business by keeping a close watch on your competitors. Competitor analysis is something we owe to web scraping, as the level of competitive intelligence it provides was never achievable in the past. By crawling forums and social media sites where your target audience is active, you can easily find out whether a competitor is leveraging something you have overlooked. Competitor analysis is all about staying up to date on every action your competitors take, so that you are always prepared for their next strategic move. If your competitors are doing better than you, this data can be used to compare your business with theirs and show you where you fall short.

Brand monitoring on Social media

With social media acting as a channel where businesses and customers interact with each other, the data available on these sites is increasingly relevant to businesses. Any issues in your business operations will also be reflected in customer sentiment. Social media is a goldmine of sentiment data that can help you detect issues within your company. By analysing the posts that mention your brand or product on social media sites, you can identify which departments of your company are functioning well and which aren't.

For example, if you run an ecommerce portal and many users are complaining on social media about delivery issues, you might want to switch to a logistics partner who does a better job. The ability to identify such issues early is extremely important, and that's where web scraping becomes a life saver. With social media scraping, monitoring your brand is easier than ever, and the chances of minor issues escalating into bigger ones are almost non-existent. Brand monitoring is crucial if you are a business operating in the online space. Many leading web scraping companies provide social media scraping solutions, which eliminates the technical complications of the process for you.

Finding untapped opportunities

There are always new and untapped markets and opportunities relevant to your business. Finding them is not an easy task with manual and outdated research methods. Web scraping can fill this gap and help you find opportunities your company can use to extend its reach and progress. Sometimes, targeting the right audience makes all the difference you've been trying to make. By using web crawling to find mentions of your relevant keywords on the web, you can easily stay updated on your niche and move into new untapped markets. Web crawling for keywords is explained in more detail in our previous blog.

Bottom line

It is no cakewalk to stay ahead of the competition, considering how competitive every industry has become in this digital age. It is crucial to find your business's weak spots and untapped opportunities before someone else does. Of course, you can always use some help from technology when you need it. Web scraping is clearly the best way to find and gather the data that will help you figure these out. With web crawling solutions that can take complete care of this niche process, nothing is stopping you from using the data and insights the web has in store for your business.

Source: https://www.promptcloud.com/blog/web-scraping-detect-weak-spots-business

Tuesday, 23 August 2016

ERP Data Conversions - Best Practices and Steps

Every company that has gone through an ERP project has gone through the painful process of getting the data ready for the new system. Executing this typically involves the following steps:

(1) Extract or define

(2) Clean and transform

(3) Load

(4) Validate and verify

This process is typically executed multiple times (2-5+ times, depending on complexity) during an ERP project to ensure that good data ends up in the new system. If the data is incorrect, insufficiently cleaned or adjusted, or loaded incorrectly into the new system, it can cause serious problems when the new system is launched.

(1) Extract or define

This involves extracting the data from the legacy systems that are to be decommissioned. In some cases the data may not exist in a legacy system at all (the old process may be spreadsheet-based) and has to be created from scratch. Typically this involves writing extraction programs or leveraging existing reports to get the data into a format that can be put into a spreadsheet or a data management application.

(2) Data cleansing

Once extracted, the data is normally reviewed for accuracy by the business, supported by the IT team, and adjusted if it is incorrect or in a structure the new ERP system does not understand. Depending on the level of change and the data quality, this can represent a significant effort involving many business stakeholders and may require multiple cycles.

(3) Load data to new system

As the data gets structured into a format the receiving ERP system can handle, the load programs may also be built to handle certain changes as part of converting the data into the new system. Data is loaded into interface tables and from there into the new system's core master data and transaction tables.

When loading the data into the new system, it is key to consider the inter-dependencies between the different data elements and to validate those cross dependencies. Exceptions are dealt with, captured as lessons learned, and used to modify the extracts, data cleansing or load process for the next cycle.

(4) Validate and verify

The final phase of the data conversion process is to verify the converted data through extracts, reports or manual checks to ensure that all the data went in correctly. This may also involve internal and external audit groups and all the key data owners. Part of the testing will also include attempting to transact successfully using the converted data.
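To make the verification step concrete, here is a minimal sketch (not from the original article) that reconciles the cleansed extract against a report pulled from the new system after the load. It assumes both sides are available as CSV files sharing a key column; the file names and the customer_id column are purely hypothetical.

  # A minimal reconciliation sketch: compare key values in the cleansed
  # extract against what a report from the new system shows after the load.
  # File names and the "customer_id" column are hypothetical.
  import csv

  def load_keys(filename, key_column):
      with open(filename, 'rb') as f:
          return set(row[key_column] for row in csv.DictReader(f))

  source_keys = load_keys('cleansed_extract.csv', 'customer_id')
  loaded_keys = load_keys('new_system_report.csv', 'customer_id')

  print 'Extracted: %d  Loaded: %d' % (len(source_keys), len(loaded_keys))
  print 'Missing from new system:', sorted(source_keys - loaded_keys)
  print 'Unexpected in new system:', sorted(loaded_keys - source_keys)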

The top success factors, or best practices, for executing a successful conversion I would prioritize as follows:

(1) Start the data conversion early enough by assessing the quality of the data. Starting too late can result in either costly project delays or decisions to load garbage and "deal with it later", which multiplies problems as the new system is launched.

(2) Identify and assign data owners and customers (often forgotten) for the different elements. Ensure that not only do the data owners sign off on the data conversions, but that the key users of the data are also involved in reviewing the selection criteria, data cleansing process and load verification.

(3) Run enough rounds of testing of the data, including not only validating the loads but also transacting with the converted data.

(4) Depending on the complexity, evaluate tools beyond spreadsheets and custom programming to help with the cleansing, transformation and load steps of the data conversion process.

(5) Don't underestimate the effort involved in cleansing and validating the converted data.

(6) Define processes, and consider additional tools, to maintain the accuracy of the data after the system goes live.

Source: http://ezinearticles.com/?ERP-Data-Conversions---Best-Practices-and-Steps&id=7263314

Wednesday, 10 August 2016

Getting Data from the Web

You’ve tried everything else, and you haven’t managed to get your hands on the data you want. You’ve found the data on the web, but, alas — no download options are available and copy-paste has failed you. Fear not, there may still be a way to get the data out. For example you can:

Get data from web-based APIs, such as interfaces provided by online databases and many modern web applications (including Twitter, Facebook and many others). This is a fantastic way to access government or commercial data, as well as data from social media sites.

Extract data from PDFs. This is very difficult, as PDF is a language for printers and does not retain much information on the structure of the data that is displayed within a document. Extracting information from PDFs is beyond the scope of this book, but there are some tools and tutorials that may help you do it.

Screen scrape web sites. During screen scraping, you’re extracting structured content from a normal web page with the help of a scraping utility or by writing a small piece of code. While this method is very powerful and can be used in many places, it requires a bit of understanding about how the web works.

With all those great technical options, don't forget the simple options: it is often worth spending some time searching for a file with machine-readable data, or calling the institution that holds the data you want.

In this chapter we walk through a very basic example of scraping data from an HTML web page.

What is machine-readable data?

The goal of most of these methods is to get access to machine-readable data. Machine-readable data is created for processing by a computer, rather than for presentation to a human user. The structure of such data relates to the information it contains, not to the way it is eventually displayed. Examples of easily machine-readable formats include CSV, XML, JSON and Excel files, while formats like Word documents, HTML pages and PDF files are more concerned with the visual layout of the information. PDF, for example, is a language that talks directly to your printer; it is concerned with the position of lines and dots on a page rather than with distinguishable characters.

Scraping web sites: what for?

Everyone has done this: you go to a web site, see an interesting table and try to copy it over to Excel so you can add some numbers up or store it for later. Yet this often does not really work, or the information you want is spread across a large number of web sites. Copying by hand can quickly become very tedious, so it makes sense to use a bit of code to do it.

The advantage of scraping is that you can do it with virtually any web site — from weather forecasts to government spending, even if that site does not have an API for raw data access.

What you can and cannot scrape

There are, of course, limits to what can be scraped. Some factors that make it harder to scrape a site include:

Badly formatted HTML code with little or no structural information, e.g. older government websites.

Authentication systems that are supposed to prevent automatic access, e.g. CAPTCHA codes and paywalls.

Session-based systems that use browser cookies to keep track of what the user has been doing.

A lack of complete item listings and possibilities for wildcard search.

Blocking of bulk access by the server administrators.

Another set of limitations are legal barriers: some countries recognize database rights, which may limit your right to re-use information that has been published online. Sometimes, you can choose to ignore the license and do it anyway — depending on your jurisdiction, you may have special rights as a journalist. Scraping freely available Government data should be fine, but you may wish to double check before you publish. Commercial organizations — and certain NGOs — react with less tolerance and may try to claim that you’re “sabotaging” their systems. Other information may infringe the privacy of individuals and thereby violate data privacy laws or professional ethics.

Tools that help you scrape

There are many programs that can be used to extract bulk information from a web site, including browser extensions and some web services. Depending on your browser, tools like Readability (which helps extract text from a page) or DownThemAll (which allows you to download many files at once) will help you automate some tedious tasks, while Chrome’s Scraper extension was explicitly built to extract tables from web sites. Developer extensions like FireBug (for Firefox, the same thing is already included in Chrome, Safari and IE) let you track exactly how a web site is structured and what communications happen between your browser and the server.

ScraperWiki is a web site that allows you to code scrapers in a number of different programming languages, including Python, Ruby and PHP. If you want to get started with scraping without the hassle of setting up a programming environment on your computer, this is the way to go. Other web services, such as Google Spreadsheets and Yahoo! Pipes also allow you to perform some extraction from other web sites.

How does a web scraper work?

Web scrapers are usually small pieces of code written in a programming language such as Python, Ruby or PHP. Choosing the right language is largely a question of which community you have access to: if there is someone in your newsroom or city already working with one of these languages, then it makes sense to adopt the same language.

While some of the point-and-click scraping tools mentioned before may be helpful to get started, the real complexity involved in scraping a web site is in addressing the right pages and the right elements within these pages to extract the desired information. These tasks aren't about programming, but about understanding the structure of the web site and database.

When displaying a web site, your browser will almost always make use of two technologies: HTTP is a way for it to communicate with the server and to request specific resources, such as documents, images or videos. HTML is the language in which web sites are composed.
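To make the HTTP side tangible before we get to ScraperWiki, here is a minimal sketch (not part of the original chapter) that requests a page using nothing but Python's standard library; the URL is only a placeholder.

  # A minimal illustration of the HTTP side, using only Python's standard
  # library (outside ScraperWiki); example.org is just a placeholder address.
  import urllib2

  response = urllib2.urlopen("http://example.org/")
  html_source = response.read()    # the raw HTML source, tags and all
  print html_source[:200]          # peek at the first 200 characters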

The anatomy of a web page

Any HTML page is structured as a hierarchy of boxes (which are defined by HTML “tags”). A large box will contain many smaller ones — for example a table that has many smaller divisions: rows and cells. There are many types of tags that perform different functions — some produce boxes, others tables, images or links. Tags can also have additional properties (e.g. they can be unique identifiers) and can belong to groups called ‘classes’, which makes it possible to target and capture individual elements within a document. Selecting the appropriate elements this way and extracting their content is the key to writing a scraper.

Viewing the elements in a web page: everything can be broken up into boxes within boxes.

To scrape web pages, you’ll need to learn a bit about the different types of elements that can be in an HTML document. For example, the <table> element wraps a whole table, which has <tr> (table row) elements for its rows, which in turn contain <td> (table data) for each cell. The most common element type you will encounter is <div>, which can basically mean any block of content. The easiest way to get a feel for these elements is by using the developer toolbar in your browser: they will allow you to hover over any part of a web page and see what the underlying code is.

Tags work like book ends, marking the start and the end of a unit. For example <em> signifies the start of an italicized or emphasized piece of text and </em> signifies the end of that section. Easy.
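To see how these tags look from a scraper's point of view, here is a small self-contained sketch (not part of the original chapter) that parses a tiny, made-up table with lxml, the same library used in the example below, and picks elements out of it with CSS selectors.

  # A small sketch: parsing a hand-written HTML fragment with lxml and
  # selecting elements with CSS selectors. The markup is invented purely
  # for illustration.
  from lxml import html

  page = """
  <table id="events">
    <tr><td><h4><a href="/one">First event</a></h4></td></tr>
    <tr><td><h4><a href="/two">Second event</a></h4></td></tr>
  </table>
  """

  doc = html.fromstring(page)

  for row in doc.cssselect("#events tr"):    # every <tr> inside id="events"
      link = row.cssselect("h4 a").pop()     # the single link in that row
      print link.text, link.get("href")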

An example: scraping nuclear incidents with Python

NEWS is the International Atomic Energy Agency’s (IAEA) portal on world-wide radiation incidents (and a strong contender for membership in the Weird Title Club!). The web page lists incidents in a simple, blog-like site that can be easily scraped.

To start, create a new Python scraper on ScraperWiki and you will be presented with a text area that is mostly empty, except for some scaffolding code. In another browser window, open the IAEA site and open the developer toolbar in your browser. In the “Elements” view, try to find the HTML element for one of the news item titles. Your browser’s developer toolbar helps you connect elements on the web page with the underlying HTML code.

Investigating this page will reveal that the titles are <h4> elements within a <table>. Each event is a <tr> row, which also contains a description and a date. If we want to extract the titles of all events, we should find a way to select each row in the table sequentially, while fetching all the text within the title elements.

In order to turn this process into code, we need to make ourselves aware of all the steps involved. To get a feeling for the kind of steps required, let's play a simple game: In your ScraperWiki window, try to write up individual instructions for yourself, for each thing you are going to do while writing this scraper, like steps in a recipe (prefix each line with a hash sign to tell Python that this is not real computer code). For example:

  # Look for all rows in the table
  # Unicorn must not overflow on left side.

Try to be as precise as you can and don’t assume that the program knows anything about the page you’re attempting to scrape.

Once you’ve written down some pseudo-code, let’s compare this to the essential code for our first scraper:

  import scraperwiki
  from lxml import html

In this first section, we’re importing existing functionality from libraries — snippets of pre-written code. scraperwiki will give us the ability to download web sites, while lxml is a tool for the structured analysis of HTML documents. Good news: if you are writing a Python scraper with ScraperWiki, these two lines will always be the same.

  url = "http://www-news.iaea.org/EventList.aspx"
  doc_text = scraperwiki.scrape(url)
  doc = html.fromstring(doc_text)

Next, the code makes a name (variable): url, and assigns the URL of the IAEA page as its value. This tells the scraper that this thing exists and we want to pay attention to it. Note that the URL itself is in quotes as it is not part of the program code but a string, a sequence of characters.

We then use the url variable as input to a function, scraperwiki.scrape. A function will provide some defined job — in this case it’ll download a web page. When it’s finished, it’ll assign its output to another variable, doc_text. doc_text will now hold the actual text of the website — not the visual form you see in your browser, but the source code, including all the tags. Since this form is not very easy to parse, we’ll use another function, html.fromstring, to generate a special representation where we can easily address elements, the so-called document object model (DOM).

  for row in doc.cssselect("#tblEvents tr"):
      link_in_header = row.cssselect("h4 a").pop()
      event_title = link_in_header.text
      print event_title

In this final step, we use the DOM to find each row in our table and extract the event’s title from its header. Two new concepts are used: the for loop and element selection (.cssselect). The for loop essentially does what its name implies; it will traverse a list of items, assigning each a temporary alias (row in this case) and then run any indented instructions for each item.

The other new concept, element selection, is making use of a special language to find elements in the document. CSS selectors are normally used to add layout information to HTML elements and can be used to precisely pick an element out of a page. In this case (Line. 6) we’re selecting #tblEvents tr which will match each <tr> within the table element with the ID tblEvents (the hash simply signifies ID). Note that this will return a list of <tr> elements.

On the next line (line 7), we apply another selector to find any <a> (which is a hyperlink) within an <h4> (a title). Here we only want to look at a single element (there's just one title per row), so we have to pop it off the top of the list returned by our selector with the .pop() function.

Note that some elements in the DOM contain actual text, i.e. text that is not part of any markup language, which we can access using the [element].text syntax seen on line 8. Finally, in line 9, we’re printing that text to the ScraperWiki console. If you hit run in your scraper, the smaller window should now start listing the event’s names from the IAEA web site.

  Figure 58. A scraper in action (ScraperWiki)

You can now see a basic scraper operating: it downloads the web page, transforms it into the DOM form and then allows you to pick and extract certain content. Given this skeleton, you can try and solve some of the remaining problems using the ScraperWiki and Python documentation:

Can you find the address for the link in each event’s title?

Can you select the small box that contains the date and place by using its CSS class name and extract the element’s text?

ScraperWiki offers a small database to each scraper so you can store the results; copy the relevant example from their docs and adapt it so it will save the event titles, links and dates.

The event list has many pages; can you scrape multiple pages to get historic events as well?

As you’re trying to solve these challenges, have a look around ScraperWiki: there are many useful examples in the existing scrapers — and quite often, the data is pretty exciting, too. This way, you don’t need to start off your scraper from scratch: just choose one that is similar, fork it and adapt to your problem.

Source: http://datajournalismhandbook.org/1.0/en/getting_data_3.html


How to Scrape a Website into Excel without programming

This web scraping tutorial will teach you, visually and step by step, how to scrape (extract or pull) data from websites into Excel using import.io (a free tool), without programming skills.

Personally, I use web scraping for analysing my competitors' best-performing blog posts and content, such as which posts received the most comments or social media shares.

In this tutorial, we will scrape the following data from a blog:

    All blog post URLs.
    Author names for each post.
    Blog post titles.
    The number of social media shares each post received.

Then we will use the extracted data to determine which blog posts and authors are popular, and which posts received the most engagement from users through social media shares and on-page comments.

Let’s get started.

Step 1: Install the import.io app

The first step is to install the import.io app, a free web scraping tool and one of the best pieces of web scraping software. It is available for Windows, Mac and Linux. Import.io offers advanced data extraction features without coding by allowing you to create custom APIs or crawl entire websites.

After installation, you will need to sign up for an account. It is completely free, so don't worry. I will not cover the installation process. Once everything is set up correctly, you will see something similar to the window below after your first login.

Step 2: Choose how to scrape data using the import.io extractor

With import.io you can extract data by creating custom APIs or crawling entire websites. It comes equipped with different tools for data extraction, such as Magic, Extractor, Crawler and Connector.

In this tutorial, I will use the tool called "Extractor" to create a custom API for our data extraction process.

To get started, click the red "New" button at the top right of the page and then click the "Start Extractor" button in the pop-up window.

After clicking "Start Extractor", the import.io app's internal browser window will open as shown below.

Step 3: Data scraping process

Once the import.io browser is open, navigate to the blog URL you want to scrape data from, then turn on extraction. In this tutorial, I will use the blog URL bongo5.com for data extraction.

You can see from the window below that I have already navigated to www.bongo5.com but the extraction switch is still off.

Turn the extraction switch "ON" as shown in the window below and move to the next step.

Step 4: Training the "columns", or specifying the data we want to scrape

In this step, I will specify exactly what kind of data I want to scrape from the blog. In the import.io app, specifying the data you want to scrape is referred to as "training the columns". Columns represent the data set I want to scrape (post titles, authors' names and post URLs).

In order to understand this step, you need to know the difference between a blog page and a blog post. A page might have a single post or multiple posts, depending on the blog configuration.

A blog might have several blog posts, even hundreds or thousands of them. But I will need only one session to train the "extractor" on the data I want to extract. I will do so using import.io's visual highlighter. Once data extraction is turned on, the highlighter appears by default.

I will do the training session on a single post within a blog page that has multiple posts; the extractor will then automatically extract data for the remaining posts on the same page.

Step 4a: Creating the "post_title" column

I will start by renaming "my_column" to the name of the data I want to scrape. Our goal in this tutorial is to scrape the blog post titles, post URLs and author names, and to get social statistics later, so I will create columns for post titles, post URLs and author names. Later on, I will show you how to get social statistics for the post URLs.

After renaming "my_column" to "post_title", point the mouse cursor at any of the post titles on the blog page and the visual highlighter will automatically appear. Using the highlighter, I can select the data I want to extract.

You can see below that I selected one of the blog post titles on the page. The rectangular box with the orange border is the visual highlighter.

The app will ask you how the data is arranged on the page. Since there is more than one post on a single page, we have rows of repeating data; this blog has 25 posts per page, so you select "many rows". Sometimes you might have a single post on a page, in which case you would select "Just one row".

Source: http://nocodewebscraping.com/web-scraping-for-dummies-tutorial-with-import-io-without-coding/

Thursday, 4 August 2016

Are You Screen Scraping or Data Mining?

Many of us seem to use these terms interchangeably, so let's make sure we are clear about the differences between the two approaches.

Basically, screen scraping is a process where you use a computer program or software to extract information from a website. This is different from crawling, searching or mining a site because you are not indexing everything on the page – a screen scraper simply extracts precise information selected by the user. Screen scraping is useful when you want to do real-time price and product comparisons, archive web pages, or acquire data sets that you want to evaluate or filter.

When you perform screen scraping, you are able to scrape data more directly, and you can automate the process if you are using the right solution. Different screen scraping services and solutions offer different ways of obtaining information. Some look directly at the HTML code of the webpage to grab the data, while others use more advanced visual abstraction techniques that can often avoid "breakage" errors when the web source undergoes a programming or code change.

On the other hand, data mining is basically the process of automatically searching large amounts of information and data for patterns. This means that you already have the information, and what you really need to do is analyze the contents to find the useful parts. This is very different from screen scraping, which requires you to find the data and collect it before you can analyze it.

Data mining also involves a lot of complicated algorithms often based on various statistical methods. This process has nothing to do with how you obtain the data. All it cares about is analyzing what is available for evaluation.

Screen scraping is often mistaken for data mining when, in fact, these are two different things. Today, there are online services that offer screen scraping. Depending on what you need, you can have it custom tailored to meet your specific needs and perform precisely the tasks you want. But screen scraping does not guarantee any kind of analysis of the data.

Source: http://www.connotate.com/are-you-screen-scraping-or-data-mining/

Monday, 1 August 2016

Scraping LinkedIn Public Profiles for Fun and Profit

Reconnaissance and information gathering are part of almost every penetration testing engagement. Often, the tester will only perform network reconnaissance in an attempt to discover and map the company's network infrastructure (i.e. IP addresses, domain names, etc.), but there are other types of reconnaissance to conduct, and no, I'm not talking about dumpster diving. Thanks to social networks like LinkedIn, OSINT/WEBINT now yields even more information. This information can then be used to help the tester test anything from social engineering to weak passwords.

In this blog post I will show you how to use Pythonect to easily generate potential passwords from LinkedIn public profiles. If you haven't heard about Pythonect yet, it is a new, experimental, general-purpose dataflow programming language based on the Python programming language. Pythonect is most suitable for creating applications that are themselves focused on the "flow" of the data. An application that generates passwords from the public LinkedIn profiles of a given company's employees has a coherent and clear dataflow:

(1) Find all the employees' public LinkedIn profiles → (2) Scrape all the employees' public LinkedIn profiles → (3) Crunch all the data into potential passwords

Now that we have the general concept and high-level overview out of the way, let's dive in to the details.

Finding all the employees' public LinkedIn profiles will be done via Google Custom Search Engine, a free service by Google that allows anyone to create their own search engine. The idea is to create a search engine that, when searching for a given company name, will return all the employees' public LinkedIn profiles. How? When creating a Google Custom Search Engine it's possible to restrict the search results to a specific site (i.e. 'Sites to search'), and we're going to limit ours to linkedin.com. It's also possible to fine-tune the search results even further, e.g. uk.linkedin.com to find only employees from the United Kingdom.

Access to the newly created Google Custom Search Engine will be made using a free API key obtained from the Google API Console. Why go through the Google API? Because it allows automation (no CAPTCHAs), and it also means that the search-result pages will be returned as JSON (as opposed to HTML). The only catch with the free API key is that it's limited to 100 queries per day, but it's possible to buy an API key that is not limited.

Scraping the profiles is a matter of iterating over all the hCards in all the search-result pages, and extracting the employee name from each hCard. What is an hCard? hCard is a microformat for publishing the contact details of people, companies, organizations, and places. hCard is also supported by social networks such as Facebook, Google+ and LinkedIn for exporting public profiles. Google (when indexing) parses hCards and, when relevant, uses them in search-result pages. In other words, when search-result pages include LinkedIn public profiles, they appear as hCards and can be easily parsed.
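To picture what the scraper walks through, here is a rough, stripped-down illustration of a single search-result item. The field names ('pagemap', 'hcard', 'fn', 'title') are the ones the code below reads; the values, and everything omitted around them, are invented for illustration.

# Not an actual API response - just the slice of a search-result item that
# the scraper below reads. Field names match the code; values are invented.
item = {
    'pagemap': {
        'hcard': [
            {'fn': 'Itzik Kotler', 'title': 'Founder at Pythonect'}
        ]
    }
}

print item['pagemap']['hcard'][0]['fn']    # prints the person's full name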

Let's see the implementation of the above:

#!/usr/bin/python
#
# Copyright (C) 2012 Itzik Kotler
#
# scraper.py is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# scraper.py is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with scraper.py.  If not, see <http://www.gnu.org/licenses/>.

"""Simple LinkedIn public profiles scraper that uses Google Custom Search"""

import urllib
import simplejson


BASE_URL = "https://www.googleapis.com/customsearch/v1?key=<YOUR GOOGLE API KEY>&cx=<YOUR GOOGLE SEARCH ENGINE CX>"


def __get_all_hcards_from_query(query, index=0, hcards={}):

    url = query

    if index != 0:

        url = url + '&start=%d' % (index)

    json = simplejson.loads(urllib.urlopen(url).read())

    if json.has_key('error'):

        print "Stopping at %s due to Error!" % (url)

        print json

    else:

        for item in json['items']:

            try:

                hcards[item['pagemap']['hcard'][0]['fn']] = item['pagemap']['hcard'][0]['title']

            except KeyError as e:

                pass

        if json['queries'].has_key('nextPage'):

            return __get_all_hcards_from_query(query, json['queries']['nextPage'][0]['startIndex'], hcards)

    return hcards


def get_all_employees_by_company_via_linkedin(company):

    queries = ['"at %s" inurl:"in"', '"at %s" inurl:"pub"']

    result = {}

    for query in queries:

        _query = query % company

        result.update(__get_all_hcards_from_query(BASE_URL + '&q=' + _query))

    return list(result)

Replace <YOUR GOOGLE API KEY> and <YOUR GOOGLE SEARCH ENGINE CX> in the code above with your Google API Key and Google Search Engine CX respectively, save it to a file called scraper.py, and you're ready!

To kick-start, here is a simple program in Pythonect (utilizing the scraper module) that searches for and prints the full names of all the Pythonect company employees:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> print

The output should be:

Itzik Kotler

In my LinkedIn profile, I have listed Pythonect as a company that I work for, and since no one else works there, when searching for all the employees of the Pythonect company only my LinkedIn profile comes up.
For demonstration purposes I will keep using this example (i.e. the "Pythonect" company and the "Itzik Kotler" employee), but go ahead and replace Pythonect with other, more popular company names and see the results.

Now that we have a working skeleton, let's take its output and start crunching it. Keep in mind that every "password generation formula" is merely a guess. The examples below are only a sampling of what can be done. There are, obviously, many more possibilities and you are encouraged to experiment. But first, let's normalize the output - this way it's going to be consistent before operations are performed on it:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split()))

The normalization procedure is short and simple: convert the string to lowercase and remove any spaces, so the output should now be:

itzikkotler

As for data manipulation, out of the box (thanks to the Python standard library) we've got itertools and its combinatoric generators. Let's start by applying itertools.product:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split())) -> itertools.product(_, repeat=4) -> print

The code above will generate and print every 4-character password built from the letters of itzikkotler (i, t, z, k, o, l, e, r). However, it won't cover passwords with uppercase letters in them. So here's a simple and straightforward implementation of a cycle_uppercase function that cycles over the input letters, yielding a copy of the input with one letter in uppercase each time:

def cycle_uppercase(i):
    s = ''.join(i)
    for idx in xrange(0, len(s)):
        yield s[:idx] + s[idx].upper() + s[idx+1:]
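
For a quick sanity check of what the function yields, here is a short interpreter session (Python 2, since the function relies on xrange) on a made-up 4-letter input:

>>> list(cycle_uppercase('itzi'))
['Itzi', 'iTzi', 'itZi', 'itzI']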

To use it, save it to a file called itertools2.py, and then simply add it to the Pythonect program after the itertools.product(_, repeat=4) block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> print

Now the program will also cover passwords that include a single uppercase letter. Moving on with the data manipulation: sometimes the password might contain symbols that are not found within the scraped data. In this case, it is necessary to build a generator that takes the input and adds symbols to it. Here is a short and simple generator implemented as a Generator Expression:

[_ + postfix for postfix in ['123','!','$']]

To use it, simply add it to the Pythonect program after the itertools2.cycle_uppercase block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> [_ + postfix for postfix in ['123','!','$']] \
                    -> print

The result is that the program now appends the strings '123', '!', and '$' to every generated password, which increases the chances of guessing the user's actual password - or not, depending on the password :)

To summarize, it's possible to take OSINT/WEBINT data on a given person or company and use it to generate potential passwords, and it's easy to do with Pythonect. There are, of course, many different ways to manipulate the data into passwords, and many programs and filters that can be used. In this respect, Pythonect, being a flow-oriented language, makes it easy to experiment and research with different modules and programs in a "plug and play" manner.

Source: http://blog.ikotler.org/2012/12/scraping-linkedin-public-profiles-for.html