Monthly Archives: January 2010

Hadoop Tutorial Series, Issue #4: To Use Or Not To Use A Combiner

Welcome to the fourth issue of the Hadoop Tutorial Series. Combiners are another important Hadoop feature that every Hadoop developer should be aware of. The primary goal of combiners is to minimize the number of key/value pairs that will be shuffled across the network between mappers and reducers, and thus to save as much bandwidth as possible.

Indeed, to give you the intuition of why a combiner helps reduce the amount of data sent to the reducers, imagine the word count example on a text containing the word “the” one million times. Without a combiner, the mapper will send one million key/value pairs of the form <the,1>. With a combiner, it will potentially send far fewer key/value pairs of the form <the,N>, with N a number potentially much bigger than 1. That’s just the intuition (see the references at the end of the post for more details).

Simply speaking, a combiner can be considered a “mini reducer” that is potentially applied several times during the map phase, before the new (hopefully reduced) set of key/value pairs is sent to the reducer(s). This is why a combiner must implement the Reducer interface (or extend the Reducer class as of Hadoop 0.20).

In general you can even use the same reduce method as both your reducer and your combiner. This is the case for the word count example, where using a combiner amounts to adding a single line of code in your main method:

conf.setCombinerClass(Reduce.class);

where conf is your JobConf, or, if you use hadoop 0.20.1:

job.setCombinerClass(Reduce.class);

where job is your Job built with a customized Configuration.
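
To make that concrete, here is a minimal sketch of a 0.20.1-style driver with the combiner line in place. The TokenizerMapper and IntSumReducer names are illustrative placeholders for your existing word count mapper and reducer, not classes from this post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		Job job = new Job(conf, "wordcount with combiner");
		job.setJarByClass(WordCountWithCombiner.class);
		job.setMapperClass(TokenizerMapper.class);
		// the reducer doubles as the combiner: it is applied on the map side
		// to pre-aggregate the <word,1> pairs before they are shuffled
		job.setCombinerClass(IntSumReducer.class);
		job.setReducerClass(IntSumReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);
		FileInputFormat.addInputPath(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}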

That sounds pretty simple and useful, and at first look you might be ready to use combiners all the time by adding this single line, but there is a small catch. The kind of reducer that comes naturally to mind as a counterexample is the “mean reducer”, which computes the mean of all the values associated with a given key.

Indeed, suppose 5 key/value pairs are emitted from the mapper for a given key k: <k,40>, <k,30>, <k,20>, <k,2>, <k,8>. Without a combiner, the reducer receives the list <k,{40,30,20,2,8}> and the mean output is 20. But if a combiner were applied first on the two sets (<k,40>, <k,30>, <k,20>) and (<k,2>, <k,8>) separately, the reducer would receive the list <k,{30,5}> and the output would be different (17.5), which is not the expected behavior.

More generally, combiners can be used when the function you want to apply is both commutative and associative (it is pretty intuitive to see why). That is the case for addition, which is why the word count example can benefit from combiners, but not for the mean function (which is not associative, as shown in the counterexample above).

Note that for the mean function there is a workaround that lets you use a combiner: use two separate reduce methods, a first one used as the combiner that only performs the addition and emits, for each key, the intermediate sum along with the number of values it covers, and a second one used as the reducer that computes the mean by taking that count into account (see the references for more details, and the sketch below).
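
Here is a minimal sketch of that workaround (it is not the book's exact code), assuming the mapper emits each observation for a key as the Text value "value,1" (e.g. "40,1"). The combiner only merges partial sums and counts, which is commutative and associative, so it can safely be applied zero, one or several times; the division only happens in the final reducer:

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MeanWithCombiner {

	// used as the combiner: merges "sum,count" pairs into a partial "sum,count" pair
	public static class SumCountCombiner extends Reducer<Text, Text, Text, Text> {
		@Override
		protected void reduce(Text key, Iterable<Text> values, Context context)
				throws IOException, InterruptedException {
			long sum = 0, count = 0;
			for (Text value : values) {
				String[] parts = value.toString().split(",");
				sum += Long.parseLong(parts[0]);
				count += Long.parseLong(parts[1]);
			}
			context.write(key, new Text(sum + "," + count));
		}
	}

	// used as the reducer: merges the remaining pairs and only then divides
	public static class MeanReducer extends Reducer<Text, Text, Text, DoubleWritable> {
		@Override
		protected void reduce(Text key, Iterable<Text> values, Context context)
				throws IOException, InterruptedException {
			long sum = 0, count = 0;
			for (Text value : values) {
				String[] parts = value.toString().split(",");
				sum += Long.parseLong(parts[0]);
				count += Long.parseLong(parts[1]);
			}
			context.write(key, new DoubleWritable((double) sum / count));
		}
	}
}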

As usual in this series, let’s observe the lesson learned in action using our learning playground. For that you can take the original word count example (or its Hadoop 0.20.1 version that we used in the previous issue), add the single combiner line mentioned earlier in the post, and run it on our Moby Dick mascot. Here is what we can see at the end of the execution:

Output of the word count example when using a combiner. Click to enlarge.

Now that you understand what counters are, if you click to enlarge the picture you’ll see the value of two counters: Combine input records=215137 and Combine output records=33783. That’s a pretty serious reduction in the number of key/value pairs to send to the reducers. You can easily imagine the impact for much larger jobs (see the references below for real numbers).

Enjoy combiners, whenever you can…

References

  • See the 4th tip of this must-read blog post by Todd Lipcon to get a better feel for the benefit of combiners on a 40GB wordcount job benchmark.
  • For a deeper understanding of when and how combiners are used in the MapReduce data flow, check this section of the (quite heavy but) excellent Yahoo! Hadoop tutorial.
  • To extend the intuition given in the post on why combiners help, you can go over this walk-through.
  • Both Hadoop: The Definitive Guide and Hadoop in Action contain interesting information on combiners (parts of both inspired this post). In particular, the first contains a great section on when exactly combiners come into play in the MapReduce data flow, and the second contains the full code of the mean function workaround mentioned above.


Hadoop Tutorial Series, Issue #3: Counters In Action

Note: This post has been updated with code working for Hadoop 0.20.1.

In this 3rd issue of the Hadoop Tutorial Series, we’ll talk about a very simple but very useful Hadoop feature: counters.

Even if you have never defined any counters in Hadoop, you can see some of them each time you run a Hadoop job. Indeed, here is what you can see from the client console at the end of the execution of a job (it can also be seen from the web interface):

Hadoop internal counters at the end of a job (Click to enlarge).

As you can see, 18 internal counters are presented inside different groups. For instance, you can see a section “Job Counters” with three different counters giving basic information about the job, like the number of mappers and reducers. In that example, “Job Counters” is the group of the counter and “Launched reduce tasks” (for instance) is the name of the counter itself.

It is very handy to define your own counters to track any kind of statistics about the records you are manipulating in the mapper and the reducer. The most natural use is to track the number of malformed records.

If you are executing a job and you see an abnormally high number of malformed records, it can be a good hint that you have a bug in your code or some problem with your data (note that this is actually a much simpler way to spot issues than tracking error messages in a distributed set of log files). But you can use counters for any other kind of statistics on your records.

One easy way to define your own counters from your Java code is:

  • Declaring an enum representing your counters. The enum name is the group of the counter, and each field of the enum is the name of the counter that will be reported in this same group
  • Incrementing the desired counters from your map and reduce methods through the Context of your mapper or reducer (in previous Hadoop versions it was done through the Reporter.incrCounter() method, but the Reporter is no longer used in the new API of Hadoop 0.20)

So let’s see an example. We’ll take the word count example revised for version 0.20.1 to illustrate the use of counters. We will create a counter group called WordsNature that will count how many unique tokens there are in all, how many unique tokens start with a digit, and how many unique tokens start with a letter.

So our enum declaration will look like this:

 static enum WordsNature { STARTS_WITH_DIGIT, STARTS_WITH_LETTER, ALL }

We will also need a very basic StringUtils class:

package com.philippeadjiman.hadooptraining;

public class StringUtils {

	public static boolean startsWithDigit(String s){
		if( s == null || s.length() == 0 )
			return false;

		return Character.isDigit(s.charAt(0));
	}

	public static boolean startsWithLetter(String s){
		if( s == null || s.length() == 0 )
			return false;

		return Character.isLetter(s.charAt(0));
	}

}

Since we are interested in unique tokens, we will put the counter-related code into the reduce method. Here is how the reduce method will look:

public void reduce(Text key, Iterable<IntWritable> values, Context context)
	throws IOException, InterruptedException {

	int sum = 0;
	String token = key.toString();
	if( StringUtils.startsWithDigit(token) ){
		context.getCounter(WordsNature.STARTS_WITH_DIGIT).increment(1);
	}
	else if( StringUtils.startsWithLetter(token) ){
		context.getCounter(WordsNature.STARTS_WITH_LETTER).increment(1);
	}
	context.getCounter(WordsNature.ALL).increment(1);
	for (IntWritable value : values) {
		sum += value.get();
	}
	context.write(key, new IntWritable(sum));
}

Here is the code of the WordCountWithCounter class that includes all of this.

If you want to run it inside our learning playground, you’ll just have to update the pom with the latest Hadoop version:

<dependency>
  <groupId>org.apache.mahout.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>0.20.1</version>
</dependency>

So here is the result after running the code with, as input, the whole text of Moby Dick:

We can now see our home made counters. (Click to enlarge)

So we can now see that we have 33783 unique tokens: 32511 starting with a letter and 263 starting with a digit. What about the 1009 others? Well, because the word count example uses a basic StringTokenizer that splits tokens at spaces, a lot of words simply start with a ‘(‘ or a ‘[‘ or even a ‘–‘. To solve that you can, for instance, use a Lucene StandardAnalyzer, as sketched below.
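
As an illustration, here is a minimal sketch of tokenizing a line with Lucene’s StandardAnalyzer instead of a StringTokenizer, assuming a recent Lucene version (5.x or later, where StandardAnalyzer has a no-argument constructor and lives in lucene-analyzers-common); the leading punctuation disappears and the tokens are lowercased:

import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class LuceneTokenizerExample {

	public static void main(String[] args) throws Exception {
		StandardAnalyzer analyzer = new StandardAnalyzer();
		TokenStream stream = analyzer.tokenStream("contents",
				new StringReader("(call me [Ishmael] -- 1851)"));
		CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
		stream.reset();
		while (stream.incrementToken()) {
			// prints the clean tokens: call, me, ishmael, 1851
			System.out.println(term.toString());
		}
		stream.end();
		stream.close();
		analyzer.close();
	}
}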

You should now be able to easily implement your own counters for tracking bad records/missing values, for debugging, or for gathering any kind of statistics around your job.

See you soon for another issue…

How To Build A Relevant Real Time Search Engine Prototype In Few Hundreds Lines Of Code

By the end of the post you’ll find the code along with a small command line Java program to play with, but let me first describe the specifications of the real time search engine prototype that I’m targeting here.

Basically it should take as input a search query and return as output a ranked set of URLs corresponding to the latest hot news around that search query.

In some ways it is similar to what you would expect to find on Google News or in one of the dozens of real time search engines that were released last year (let’s cite OneRiot, CrowdEye and Collecta).

The goal of my prototype is to demonstrate how to leverage Twitter and a simple ranking algorithm to obtain, most of the time, relevant URLs in response to hot queries, without having to crawl a single web page! As my primary target is relevancy, I won’t invest any effort in the performance or scalability of the prototype (retrieved results will be built at query time).

High level description of the prototype

Basically what I did is use the Twitter API, through a Java library called twitter4j, to retrieve all the latest tweets containing the input query and a link. For very hot queries you’re likely to get a lot of those (I put a limit at the last 150, but you’ll be able to change it). Once I have my “link farm”, I apply a basic ranking algorithm that ranks first the URLs that are the most referenced (see the sketch below).
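
Here is a minimal sketch of that core loop (it is not the prototype's exact code), assuming a recent twitter4j version (4.x) where the search call returns Status objects; URLs are pulled out of the tweet text with a naive regex and counted by how many tweets reference them:

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import twitter4j.Query;
import twitter4j.QueryResult;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;

public class LinkRanker {

	private static final Pattern URL_PATTERN = Pattern.compile("https?://\\S+");

	// returns, for each URL found in the latest tweets about the query,
	// the number of tweets referencing it (sort by descending value to rank)
	public static Map<String, Integer> countLinks(String queryString, int maxTweets)
			throws TwitterException {
		Twitter twitter = new TwitterFactory().getInstance();
		Query query = new Query(queryString + " filter:links");
		query.setCount(Math.min(maxTweets, 100)); // a single search page is capped by the API

		Map<String, Integer> counts = new HashMap<String, Integer>();
		QueryResult result = twitter.search(query);
		for (Status status : result.getTweets()) {
			Matcher matcher = URL_PATTERN.matcher(status.getText());
			while (matcher.find()) {
				// this is where each short URL would be expanded before counting (see below)
				String url = matcher.group();
				Integer current = counts.get(url);
				counts.put(url, current == null ? 1 : current + 1);
			}
		}
		return counts;
	}
}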

As most of the URLs in tweets are shortened URLs, the trick is to spot the same URLs that were shortened by different shortening services. For instance, both of the following shortened URLs point to the same page of my blog: http://tinyurl.com/yajkgeg and http://bit.ly/SmHw6. It may sound like a corner case, but it actually happens all the time on hot queries. So the idea is to convert all the short URLs into their expanded version. To see how to write a universal URL expander in Java that works for the 90+ existing URL shortening services, check the post that is referenced by the two short URLs above.
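
For reference, here is a minimal sketch of the idea behind such a universal expander (not the exact code from the referenced post): it asks the shortening service for the URL without following the redirect and reads the long URL from the Location header.

import java.net.HttpURLConnection;
import java.net.URL;

public class UrlExpander {

	// expands a short URL by reading the Location header of the 301/302 answer;
	// falls back to the original URL if anything goes wrong
	public static String expand(String shortUrl) {
		try {
			HttpURLConnection connection =
					(HttpURLConnection) new URL(shortUrl).openConnection();
			connection.setInstanceFollowRedirects(false);
			connection.setRequestMethod("HEAD");
			connection.connect();
			String location = connection.getHeaderField("Location");
			connection.disconnect();
			return location != null ? location : shortUrl;
		} catch (Exception e) {
			return shortUrl;
		}
	}
}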

Note that you can improve the ranking algorithm in tons of ways, by exploiting the text in the tweets, who actually wrote the tweet (reputation), or other sources like Digg and much more, but as we’ll see, even in its simplest form the ranking algorithm presented above works pretty well.

Playing with some hot queries

To find some hot queries to play with, you can for instance take one of the Google Hot Trends queries (unfortunately down from 100 to 40 to 20). Let’s try a very hot topic while I’m writing this post: the Google Nexus One phone, which was about to be presented to the press two days after I started to write this post.

Below I have compiled the results obtained respectively by Google News, OneRiot and my toy prototype on the query “nexus one”. Click the picture to enlarge.

Comparing the results on Nexus One. Click to enlarge.

I hope you enjoyed my killer UI :). But let’s focus on the three URLs corresponding to the first result of each one:

Given that, at the time I issued the query, the Nexus One had not yet been released, I would say that the article the prototype found is the best one, since it is the only one presenting an exclusive video demonstrating the not-yet-released phone. This is also why so many people were tweeting about this link: because it was the best at that precise time! We’ll see even more in the next section.

Before that, let’s try another query that is hot today (in the top 20 hottest queries of Google Hot Trends): “byron de la beckwith”.

This time it is not clear what story/news is hidden behind that hot query, but running it on the prototype gives as the first link the article below (click on the picture if you want to see the full article).

First ranked result by the prototype for the query "byron de la beckwith". Click to follow the article.

Again this is a very relevant result (oneRiot and Google News gave the same one at that time).

The temporal aspect of hot queries

What is interesting with hot queries is that you expect the results to change even within a short amount of time. Indeed, any story or breaking news generally evolves as new elements come in. As promised, let’s follow our “nexus one” query.

In the previous section, the prototype’s first result was a very relevant article from Engadget. I relaunched the same query 12 hours later. The first-ranked result returned by my prototype is now different: still an article from Engadget (see picture below), but this time a much more in-depth review of the phone, with more videos, including a very funny comparison between the Android, the iPhone and the Nexus One.

Then I waited for Google to hold its press conference one day later. I issued the query again. Can you guess what the first link given by my prototype was? You got it: the official Google Nexus One website.

The first link given by the prototype on "nexus one" about one day before its official presentation by Google. Click to follow the article

Again this is not a corner case. This temporal aspect happens all the time, for any type of breaking news or event. As a last example of this phenomenon, let’s take the movie Avatar. In the first days before and after the movie was released, all you got were links to the trailer or even to the movie itself. Now, a few weeks later, what you get is a very fast changing list of links to fun parodies of the movie, with titles like “Do you want to date my avatar” (picture below) or a letter attempting to prove that Avatar is actually Pocahontas in 3D :).

A few weeks after the release of the Avatar movie, the first links are a fast changing list of parodies

Playing by yourself with the prototype

If you just want to run the prototype through the command line

You must have Java 6 installed (you can check by opening a console and typing java -version). On recent Macs, see those instructions for having Java 6 ready to use in a snap.
Then just download this zip archive: jarsDependencies.zip.
Save it and extract it somewhere on your computer. It will create a directory named prototypeJars.
Open a command prompt and go inside the prototypeJars directory.

If you are on windows, just type:

java -cp "*;" com.philippeadjiman.rtseproto.RealTimeSEPrototype "nexus one" 150 OFF

If you are on Linux or Mac just type:

java -cp "*" com.philippeadjiman.rtseproto.RealTimeSEPrototype "nexus one" 150 OFF

You’ll notice the last three arguments (all are mandatory):

  • “nexus one”: the query. Type whatever you want here, but keep the quotes.
  • 150: the maximum number of tweets to retrieve from the timeline. Put any number between 1 and 1000, but 150 is good enough.
  • OFF: whether or not you want the prototype to expand the short URLs. If you put ON, be patient, it may take a while. Even though duplicate short URLs happen all the time, going with OFF gives a good approximation of the leading results. Unless there is a problem with Twitter, putting OFF should give you the results within a few seconds.

Only the top 20 results will be printed.

If you want to play with the code

As the title suggests, this is just a few hundred lines of (Java) code. As it is a toy project, and to keep things simple, I voluntarily didn’t use any DI framework like Spring or Guice and tried to use as few external libraries as possible (not even log4j!). I did write a minimal amount of unit tests since I cannot code without them, and I did use the google-collections library for the same reason :).

Also, I tried to write at least a minimal amount of comments, in particular where I think the code should be improved for better performance. But remember: the prototype is of course not scalable, as it does not rely on any indexing strategy (it computes the results at query time). Building a real search engine would first involve building an index offline (using Lucene, for instance).

You’ll find the source code here prototype_src.zip.

If you are using maven and eclipse (or other popular IDE), you should be ready to go in less than a minute by unpacking the zip, typing “mvn eclipse:eclipse” and importing the existing project.

Some final remarks

What I wanted to show here is mainly that, without crawling a single web page, you can answer “hot queries” with a relevancy comparable to what you can find on Google News or any “real time search engine”. This is made possible by judiciously using the tremendous power that Twitter provides with its open API.

Of course building a real “real time search engine” would require much more than a few hundred lines of code 🙂 and hundreds of features could be added to this prototype, but I would keep two core principles:

  • Real time search results should be links, not micro-blogging text like tweets. The text of some tweets can be relevant, but as a secondary level of information.
  • Let the “real time crowd” do the ranking for you. If a link is related in some way to your query and was heavily and recently tweeted or dugg (you name it), then there is a good chance that it will be a relevant “real time” result.

In that sense, among the dozens of real time search engines I have tested, my favorite one remains oneriot.

This is the “pull” side of things (when the user knows what to search for). I did not talk about the “push” side of the real time web here; probably in another post…

If you have issues running the prototype or any other question/remark, please shoot a comment.
