I have been using custom jars for my MapReduce jobs for the past few years, and because it is pure Java programming, I get a lot of flexibility. But writing Java means a lot of code to maintain, and most of my MapReduce jobs are just joins with a little spice in them, so moving to Hive may be a better path.
The MapReduce job I face here is a left outer join of two datasets on the same keys. Because it is an outer join, there will be null values, and for those nulls I want to look up the default values to assign from a map.
For example, I have two datasets:
dataset 1: KEY_ID CITY SIZE_TYPE
dataset 2: KEY_ID POPULATION
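Assuming the two datasets are Hive tables, and assuming the default lookup map can itself be loaded as a small table keyed by SIZE_TYPE (an assumption here; key the defaults table however your map is keyed), the whole job collapses to one query, with `COALESCE` picking the default whenever the outer join produces a null:

```sql
-- Hypothetical table and column names.
SELECT a.key_id,
       a.city,
       a.size_type,
       COALESCE(b.population, d.default_population) AS population
FROM dataset1 a
LEFT OUTER JOIN dataset2 b ON (a.key_id = b.key_id)
LEFT OUTER JOIN defaults d ON (a.size_type = d.size_type);
```

Because the defaults table is small, Hive can handle the second join as a map-side join, so the default lookup costs very little.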
I tried to set up an L2TP VPN on Ubuntu 10.10 using xl2tpd. I installed xl2tpd from the repository first:
apt-get install xl2tpd, which gave me version 1.2.6.
I set an IP range, but when I tried to connect to the VPN server, the remote IP was always 0.0.0.0 (I checked /var/log/syslog). After searching for a while, I found it is actually a bug in xl2tpd, and the bug is fixed in a later version.
To the Moon is the kind of game you would sneer at on first sight: its simple, slightly rough graphics make it feel like you have stepped back at least ten years. Yet this 70 MB game with 16-bit-style visuals was named Gamespot's best-story indie game of 2011. I figured a game this plain-looking must have something truly special to earn that honor, and the reviews singled out the soundtrack for particular praise, so on an impulse I went to the official site and spent 12 bucks on a legitimate copy to support original indie games.
DistributedCache is a very useful Hadoop feature that enables you to pass resource files to each mapper or reducer.
For example, say you have a file stopWordList.txt that contains all the stop words you want to exclude when you do a word count. In your reducer, you want to check each value passed by the mapper: if the value appears in the stop-word list, you skip it and move on to the next value.
To use DistributedCache, you first need to register the file in the job configuration in the driver:
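With the Java API the driver registers the file with `DistributedCache.addCacheFile()` on the job configuration; with Hadoop Streaming the `-files` option ships the file and symlinks it into each task's working directory. The reducer-side stop-word check itself can be sketched like this (plain Python illustrating the logic only, not the Hadoop API; the word list contents are hypothetical):

```python
# Sketch of the reducer-side filtering. Assumes the cached
# stopWordList.txt is available in the task's working directory,
# one stop word per line.

def load_stop_words(path="stopWordList.txt"):
    """Read the cached stop-word file into a set, one word per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def filter_words(words, stop_words):
    """Drop every word that appears in the stop-word set."""
    return [w for w in words if w not in stop_words]
```

In a real reducer you would load the set once at setup time and then apply the membership check to each incoming value.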
Related posts are a very useful feature that can keep visitors browsing your website based on post tags.
You can install a plugin to do this, or you can simply add the following code to single.php:
Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems.
In short, you can run a Hadoop MapReduce using SQL-like statements with Hive.
Here is a WordCount example I did using Hive. The example first shows how to do it on your local machine; then I will show how to do it using Amazon EMR.
1. Install Hive.
First you need to install Hadoop locally; here is a post on how to do it. After you have installed Hadoop, you can follow the official Hive tutorial to install Hive.
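2. Run the query. Once Hive is up, WordCount itself is short; this is essentially the example from the official Getting Started guide (the input path and table name here are placeholders):

```sql
-- Load raw text into a one-column table, split each line into words,
-- and count them.
CREATE TABLE docs (line STRING);
LOAD DATA LOCAL INPATH '/tmp/wordcount-input' OVERWRITE INTO TABLE docs;

SELECT word, count(1) AS count
FROM (SELECT explode(split(line, '\\s')) AS word FROM docs) w
GROUP BY word;
```

Hive compiles this into the same kind of MapReduce job you would otherwise write by hand in Java.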
I used to use PHP to handle Ajax requests, which is pretty easy. Doing it with the Django framework is a little more involved; here is a simple example, and I'm sure there are other ways.
1. In your views.py:
(1) include the following lines
from json import dumps
from django.views.decorators.csrf import csrf_exempt
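(2) Add a view that answers the Ajax call and returns JSON. This is a minimal sketch; the view name and the `name` parameter are hypothetical, and `csrf_exempt` is just the quickest way to let the POST through without a CSRF token:

```python
from json import dumps

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def ajax_greet(request):
    # Read a field from the Ajax POST and reply with a JSON payload.
    name = request.POST.get("name", "")
    data = {"greeting": "Hello, " + name}
    return HttpResponse(dumps(data), content_type="application/json")
```

Hook the view up in urls.py, then POST to it from the page (for example with jQuery's $.post) and parse the JSON response on the client side.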
Screenwriters: Nobuyoshi Araki / Ryo Iwamatsu
Starring: Naoto Takenaka / Miho Nakayama / Takako Matsu / Tadanobu Asano / Yoshimitsu Morita / Shinya Tsukamoto / Tomokazu Miura