MENU
last data update: 2011/10/17, 21:11
Website loading time
during the test: 1.73 s
cable connection (average): 1.97 s
DSL connection (average): 2.2 s
modem (average): 14.64 s
HTTP headers
HTTP/1.0 200 OK
Content-Type: text/html; charset=UTF-8
Expires: Tue, 18 Oct 2011 04:11:21 GMT
Date: Tue, 18 Oct 2011 04:11:21 GMT
Cache-Control: private, max-age=0
Last-Modified: Sun, 16 Oct 2011 17:31:33 GMT
ETag: "14985a4f-1cd2-48a3-ae83-e4a8a0829e22"
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Server: GSE
Information about DNS servers
horicky.blogspot.com | CNAME | blogspot.l.google.com | IN | 3600 |
Received from the first DNS server
Request to the server "horicky.blogspot.com"
You used the following DNS server:
DNS Name: ns2.kotinet.com
DNS Server Address: 212.50.192.226#53
DNS server aliases:
HEADER: opcode: QUERY, status: NOERROR, id: 13173
flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 4
QUESTION SECTION:
horicky.blogspot.com. IN ANY
AUTHORITY SECTION:
blogspot.com. 28739 IN NS ns2.google.com.
blogspot.com. 28739 IN NS ns3.google.com.
blogspot.com. 28739 IN NS ns4.google.com.
blogspot.com. 28739 IN NS ns1.google.com.
ADDITIONAL SECTION:
ns1.google.com. 205580 IN A 216.239.32.10
ns2.google.com. 205580 IN A 216.239.34.10
ns3.google.com. 205580 IN A 216.239.36.10
ns4.google.com. 205580 IN A 216.239.38.10
Received 181 bytes from address 212.50.192.226#53 in 137 ms
Received from the second DNS server
Request to the server "horicky.blogspot.com"
Received 38 bytes from address 82.141.108.26#53 in 143 ms
Request to the server "horicky.blogspot.com"
You used the following DNS server:
DNS Name: ns3.kotinet.com
DNS Server Address: 82.141.108.26#53
DNS server aliases:
Host horicky.blogspot.com not found: 5(REFUSED)
Received 38 bytes from address 82.141.108.26#53 in 142 ms
Subdomains (the first 50)
Typos (misspellings)
goricky.blogspot.com boricky.blogspot.com noricky.blogspot.com joricky.blogspot.com uoricky.blogspot.com yoricky.blogspot.com hiricky.blogspot.com hkricky.blogspot.com hlricky.blogspot.com hpricky.blogspot.com h0ricky.blogspot.com h9ricky.blogspot.com hoeicky.blogspot.com hodicky.blogspot.com hoficky.blogspot.com hoticky.blogspot.com ho5icky.blogspot.com ho4icky.blogspot.com horucky.blogspot.com horjcky.blogspot.com | horkcky.blogspot.com horocky.blogspot.com hor9cky.blogspot.com hor8cky.blogspot.com horixky.blogspot.com horivky.blogspot.com horifky.blogspot.com horidky.blogspot.com horicjy.blogspot.com horicmy.blogspot.com horicly.blogspot.com horicoy.blogspot.com horiciy.blogspot.com horickt.blogspot.com horickg.blogspot.com horickh.blogspot.com horicku.blogspot.com horick7.blogspot.com horick6.blogspot.com oricky.blogspot.com | hricky.blogspot.com hoicky.blogspot.com horcky.blogspot.com horiky.blogspot.com horicy.blogspot.com horick.blogspot.com ohricky.blogspot.com hroicky.blogspot.com hoircky.blogspot.com horciky.blogspot.com horikcy.blogspot.com horicyk.blogspot.com hhoricky.blogspot.com hooricky.blogspot.com horricky.blogspot.com horiicky.blogspot.com horiccky.blogspot.com horickky.blogspot.com horickyy.blogspot.com |
Location
IP: 209.85.175.132
continent: NA, country: United States (USA), city: Mountain View
Website value
rank in the traffic statistics:
There is not enough data to estimate website value.
Basic information
website built using CSS
code weight: 90.4 KB
text-to-code ratio: 51 %
title: Pragmatic Programming Techniques
description:
keywords:
encoding: UTF-8
language: en
Website code analysis
one-word phrases repeated at least three times
Phrase | Quantity |
---|---|
the | 122 |
to | 66 |
of | 54 |
is | 46 |
we | 45 |
and | 42 |
user | 33 |
that | 27 |
item | 25 |
can | 24 |
as | 17 |
be | 15 |
userX | 12 |
with | 12 |
space | 12 |
each | 12 |
matrix | 11 |
in | 11 |
by | 10 |
concept | 10 |
use | 10 |
has | 10 |
similarity | 10 |
rating | 10 |
an | 10 |
this | 9 |
The | 9 |
movies | 9 |
compute | 9 |
In | 9 |
if | 8 |
... | 8 |
users | 8 |
items | 8 |
at | 8 |
other | 8 |
set | 8 |
will | 7 |
interaction | 7 |
then | 7 |
into | 7 |
on | 7 |
all | 7 |
between | 7 |
number | 7 |
such | 6 |
idea | 6 |
same | 6 |
space. | 6 |
do | 6 |
computing | 6 |
from | 6 |
row | 6 |
user's | 5 |
function | 5 |
recommend | 5 |
them | 5 |
these | 5 |
are | 5 |
vector | 5 |
metadata | 5 |
top | 5 |
have | 5 |
tag | 5 |
need | 4 |
for | 4 |
Then | 4 |
how | 4 |
words, | 4 |
equivalent | 4 |
match | 4 |
they | 4 |
For | 4 |
similar | 4 |
also | 4 |
test | 4 |
one | 4 |
should | 4 |
cell | 4 |
We | 4 |
seen | 4 |
following | 4 |
itemA | 4 |
rate | 4 |
there | 4 |
group | 4 |
both | 4 |
map | 3 |
rows | 3 |
example, | 3 |
who | 3 |
SVD | 3 |
represents | 3 |
Notice | 3 |
know | 3 |
find | 3 |
product | 3 |
look | 3 |
dot | 3 |
recommender | 3 |
our | 3 |
more | 3 |
If | 3 |
value | 3 |
existing | 3 |
It | 3 |
And | 3 |
determine | 3 |
This | 3 |
association | 3 |
which | 3 |
model, | 3 |
To | 3 |
algorithm | 3 |
first | 3 |
follows | 3 |
it | 3 |
what | 3 |
cells | 3 |
column | 3 |
or | 3 |
rule | 3 |
their | 3 |
time, | 3 |
(or | 3 |
given | 3 |
itemY | 3 |
Now | 3 |
1, | 3 |
two-word phrases repeated at least three times
Phrase | Quantity |
---|---|
to the | 13 |
we can | 13 |
is to | 11 |
of the | 9 |
the user | 8 |
the item | 8 |
number of | 7 |
user and | 7 |
the concept | 6 |
the same | 6 |
compute the | 5 |
can be | 5 |
in the | 5 |
that we | 5 |
idea is | 5 |
the number | 5 |
at the | 5 |
to user | 5 |
of movies | 5 |
the similarity | 5 |
space to | 5 |
the test | 4 |
equivalent to | 4 |
the following | 4 |
need to | 4 |
movies that | 4 |
concept space | 4 |
The idea | 4 |
item space | 4 |
between user | 4 |
set of | 4 |
user space | 4 |
that is | 4 |
other words, | 4 |
an item | 4 |
In other | 4 |
the matrix | 4 |
the user's | 4 |
In this | 4 |
and then | 4 |
space and | 3 |
user to | 3 |
userX and | 3 |
is equivalent | 3 |
with the | 3 |
to be | 3 |
and the | 3 |
and item | 3 |
Notice that | 3 |
represents the | 3 |
dot product | 3 |
to determine | 3 |
the top | 3 |
For example, | 3 |
the cell | 3 |
we have | 3 |
concept space. | 3 |
as follows | 3 |
map the | 3 |
we know | 3 |
that the | 3 |
computing all | 3 |
to item | 3 |
If we | 3 |
there are | 3 |
we use | 3 |
the set | 3 |
can use | 3 |
rating on | 3 |
use the | 3 |
then compute | 3 |
this model, | 3 |
model, we | 3 |
group of | 3 |
such as | 3 |
into the | 3 |
item to | 3 |
do the | 3 |
three-word phrases repeated at least three times
Phrase | Quantity |
---|---|
the number of | 5 |
of movies that | 4 |
The idea is | 4 |
In other words, | 4 |
between user and | 4 |
idea is to | 4 |
the concept space. | 3 |
to the item | 3 |
is equivalent to | 3 |
the item space | 3 |
space to the | 3 |
user and item | 3 |
the set of | 3 |
this model, we | 3 |
user space to | 3 |
B tags
Now, given all the metadata of user and item, as well as their interactions over time, can we answer the following questions ...

What is the probability that userX purchases itemY?
What rating will userX give to itemY?
What are the top k unseen items that should be recommended to userX?

Content-based Approach

In this approach, we make use of the metadata to categorize user and item and then match them at the category level. One example is recommending jobs to candidates: we can do an IR/text search to match the user's resume against the job descriptions. Another example is to recommend an item that is "similar" to one that the user has purchased; similarity is measured according to the item's metadata, and various distance functions can be used. The goal is to find the k nearest neighbors of the item we know the user likes.

Collaborative Filtering Approach

In this approach, we look purely at the interactions between user and item, and use those to perform our recommendation. The interaction data can be represented as a matrix in which each cell represents the interaction between a user and an item. For example, a cell can contain the rating that the user gives to the item (in case the cell holds a numeric value), or it can be just a binary value indicating whether the interaction between user and item has happened (e.g. "1" if userX has purchased itemY, and "0" otherwise).

The matrix is also extremely sparse, meaning that most of the cells are unfilled. We need to be careful about how we treat these unfilled cells; there are 2 common ways ...

Treat these unknown cells as "0", which is equivalent to the user giving a rating of "0". This may or may not be a good idea depending on your application scenario.
Guess what the missing value should be. For example, to guess what userX will rate itemA given we know his rating on itemB, we can look at all users (or those in the same age group as userX) who have rated both itemA and itemB, then compute an average rating from them.
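The second strategy (guessing a missing value from comparable users) can be sketched in a few lines of Python. The user and item names below are hypothetical toy data, not from the post:

```python
# Hypothetical ratings: user -> {item: rating}; user3 has rated itemB only.
ratings = {
    "user1": {"itemA": 4, "itemB": 5},
    "user2": {"itemA": 2, "itemB": 3},
    "user3": {"itemB": 4},
}

def guess_rating(ratings, target_user, target_item, known_item):
    """Average target_item over users who rated both target_item and known_item."""
    peers = [r[target_item] for user, r in ratings.items()
             if user != target_user and target_item in r and known_item in r]
    return sum(peers) / len(peers) if peers else None

print(guess_rating(ratings, "user3", "itemA", "itemB"))  # (4 + 2) / 2 = 3.0
```

A real system would restrict the peer set further (e.g. to the same age group, as suggested above) before averaging.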
Use the average ratings of itemA and itemB to interpolate userX's rating on itemA given his rating on itemB.

User-based Collaboration Filter

In this model, we do the following ...

Find a group of users that is "similar" to userX
Find all movies liked by this group that haven't been seen by userX
Rank these movies and recommend them to userX

This introduces the concept of user-to-user similarity, which is basically the similarity between 2 row vectors of the user/item matrix. To compute the K nearest neighbors of a particular user, a naive implementation is to compute the "similarity" to every other user and pick the top K.

Different similarity functions can be used. The Jaccard similarity is defined as the number of movies in the intersection of what both users have seen, divided by the number of movies in the union of what they have seen. Pearson similarity first normalizes each user's ratings and then computes the cosine distance.

There are two problems with this approach ...

Comparing userX and userY is expensive, as they have millions of attributes
Finding the top k users similar to userX requires computing all pairs of userX and userY

Locality Sensitive Hashing and Minhash
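Before turning to hashing, the two similarity functions just described can be sketched as follows. The movie sets and ratings are hypothetical toy data:

```python
import math

def jaccard(seen_x, seen_y):
    """Movies both users have seen, divided by movies either has seen."""
    union = seen_x | seen_y
    return len(seen_x & seen_y) / len(union) if union else 0.0

def pearson(rx, ry):
    """Mean-center each user's ratings over co-rated movies, then cosine."""
    common = set(rx) & set(ry)
    if not common:
        return 0.0
    mx = sum(rx[m] for m in common) / len(common)
    my = sum(ry[m] for m in common) / len(common)
    num = sum((rx[m] - mx) * (ry[m] - my) for m in common)
    den = math.sqrt(sum((rx[m] - mx) ** 2 for m in common)
                    * sum((ry[m] - my) ** 2 for m in common))
    return num / den if den else 0.0

print(jaccard({"m1", "m2", "m3"}, {"m2", "m3", "m4"}))  # 2/4 = 0.5
print(pearson({"m1": 1, "m2": 5}, {"m1": 2, "m2": 6}))  # 1.0
```

Computing either function for every pair of users is exactly the quadratic cost that minhash and LSH, described next, are designed to avoid.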
It will be expensive to permute the rows if the number of rows is large. Remember that the purpose of h(c1) is to return the row number of the first row that is 1. So we can scan each row of c1 to see if it is 1; if so, we apply a function newRowNum = hash(rowNum) to simulate a permutation, and take the minimum of the newRowNum values seen so far. As an optimization, instead of processing one column at a time, we can process one row at a time; the algorithm is as follows.

To solve problem 2, we need to avoid computing every other user's similarity to userX. The idea is to hash users into buckets such that similar users fall into the same bucket. Then, instead of comparing against all users, we only compute the similarity to those users in the same bucket as userX. Concretely, we horizontally partition each signature column into b bands, each with r rows. By picking the parameters b and r, we can control the likelihood (as a function of similarity) that two users fall into the same bucket in at least one band.

Item-based Collaboration Filter

If we transpose the user/item matrix and do the same thing, we can compute the item-to-item similarity. In this model, we do the following ...

Find the set of movies that userX likes (from interaction data)
Find a group of movies similar to the set of movies that we know userX likes
Rank these movies and recommend them to userX

It turns out that computing an item-based collaboration filter has more benefit than computing user-to-user similarity, for the following reasons ...

The number of items is typically smaller than the number of users
A user's taste changes over time, so a user-to-user similarity matrix needs frequent updates; item-to-item similarity tends to be more stable and requires fewer updates

Singular Value Decomposition

If we look back at the matrix, we can see that the matrix multiplication is equivalent to mapping an item from the item space to the user space.
In other words, if we view each existing item as an axis in the user space (notice that each user is a vector of their ratings on existing items), then multiplying a new item by the matrix gives a vector of the same form as a user. We can then compute a dot product between this projected new item and a user to determine their similarity. It turns out that this is equivalent to mapping the user into the item space and computing the dot product there. In other words, multiplying by the matrix maps between the item space and the user space.

Now let's imagine there is a hidden concept space in between. Instead of jumping directly from the user space to the item space, we can think of jumping from the user space to a concept space, and then to the item space. Notice that here we first map the user space to the concept space and also map the item space to the concept space; then we match user and item in the concept space. This is a generalization of our recommender.

We can use SVD to factor the matrix into 2 parts. Let P be the m by n matrix (m rows and n columns). P = UDV, where U is an m by m matrix whose columns are the eigenvectors of P*transpose(P), and V is an n by n matrix whose rows are the eigenvectors of transpose(P)*P. D is a diagonal matrix containing the eigenvalues of P*transpose(P) (equivalently, of transpose(P)*P). In other words, we can decompose P into U*squareroot(D) and squareroot(D)*V.

Notice that D can be thought of as the strength of each "concept" in the concept space, with the values ordered by decreasing magnitude. If we remove some of the weakest concepts by setting them to zero, we reduce the number of non-zero elements in D, which effectively generalizes the concept space (focuses it on the important concepts).

Calculating the SVD decomposition for a matrix with large dimensions is expensive. Fortunately, if our goal is to compute an SVD approximation (with k non-zero diagonal values), we can use the random projection mechanism as described here.
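The truncation idea can be illustrated numerically with numpy's SVD routine. The rating matrix below is a hypothetical toy example; note that numpy returns singular values directly (already sorted in decreasing order), rather than the eigenvalue convention used above:

```python
import numpy as np

# Toy 4-user x 3-item rating matrix (hypothetical numbers).
P = np.array([[5., 5., 0.],
              [4., 4., 0.],
              [0., 0., 5.],
              [0., 1., 4.]])

# numpy factors P = U @ diag(d) @ Vt, with d in decreasing order.
U, d, Vt = np.linalg.svd(P, full_matrices=False)

# Keep only the k strongest "concepts" by dropping the weakest singular values.
k = 2
P_k = U[:, :k] @ np.diag(d[:k]) @ Vt[:k, :]

# Map a new item (its column of user ratings) into the k-dim concept space.
new_item = np.array([5., 4., 0., 0.])
concept_coords = U[:, :k].T @ new_item
```

P_k is the best rank-k approximation of P, and concept_coords is the new item's position in the concept space, where it can be matched against users projected the same way.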
Association Rule Based
We represent each user as a basket and each viewing as an item (notice that we ignore the rating and use a binary value). We then run an association rule mining algorithm to detect frequent item sets and the association rules. Then, for each user, we match the user's previously viewed items against the set of rules to determine what other movies we should recommend.

Evaluate the recommender
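Returning to the association-rule approach above: the first step, detecting frequently co-viewed pairs (the simplest kind of frequent item set), can be sketched as follows, with hypothetical baskets:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(baskets, min_support):
    """Count item pairs that co-occur in at least min_support baskets."""
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

baskets = [{"m1", "m2", "m3"}, {"m1", "m2"}, {"m2", "m3"}, {"m1", "m2", "m4"}]
print(frequent_pairs(baskets, 2))  # {('m1', 'm2'): 3, ('m2', 'm3'): 2}
```

A full miner (e.g. Apriori) would extend this to larger item sets and derive rules such as "viewed m1 implies viewed m2" with their confidence values.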
After we have a recommender, how do we evaluate its performance? The basic idea is to separate the data into a training set and a test set. For the test set, we remove certain user-to-movie interactions (changing certain cells from 1 to 0), pretending the user hasn't seen the item. Then we train a recommender on the training set and feed the test set (with the removed interactions) to it. The performance is measured by how much overlap there is between the recommended items and the ones we removed. In other words, a good recommender should be able to recover the set of items that we removed from the test set.

Leverage tagging information on items
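The hold-out evaluation described above can be sketched as follows. The data and the popularity baseline are hypothetical illustrations, not code from the post:

```python
from collections import Counter

def evaluate(recommend, train, held_out, k):
    """Fraction of held-out (user, item) interactions recovered
    in each user's top-k recommendations."""
    hits, total = 0, 0
    for user, hidden_items in held_out.items():
        recs = set(recommend(train, user, k))
        hits += len(recs & hidden_items)
        total += len(hidden_items)
    return hits / total if total else 0.0

def popularity_recommender(train, user, k):
    """Hypothetical baseline: recommend the k most popular unseen items."""
    counts = Counter(item for items in train.values() for item in items)
    seen = train.get(user, set())
    return [item for item, _ in counts.most_common() if item not in seen][:k]

train = {"u1": {"m1", "m2"}, "u2": {"m1", "m3"}, "u3": {"m2"}}
held_out = {"u3": {"m1"}}   # we removed the u3 -> m1 interaction
print(evaluate(popularity_recommender, train, held_out, 2))  # 1.0
```

Any of the recommenders above (user-based, item-based, SVD, or rule-based) can be plugged in for the baseline, making the hold-out recall directly comparable across models.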
U tags
I tags
images
file name | alternative text |
---|---|
P1.png | |
icon18_edit_allbkg.gif | My Photo |
P2.png | Powered by Blogger |
p1.png | |
P3.png | |
P4.png | |
10062317@N02.jpg?1184418980 | |
bloggerbutton1.gif |
headers
H1
H2
Thursday, September 1, 2011
Sunday, August 28, 2011
Saturday, July 9, 2011
Thursday, April 21, 2011
Saturday, March 19, 2011
Thursday, March 17, 2011
Sunday, December 5, 2010
About Me
Links
Previous Posts
Archives
H3
Thursday, September 1, 2011
Sunday, August 28, 2011
Saturday, July 9, 2011
Thursday, April 21, 2011
Saturday, March 19, 2011
Thursday, March 17, 2011
Sunday, December 5, 2010
About Me
Links
Previous Posts
Archives
H4
H5
H6
internal links
address | anchor text |
---|---|
http://horicky.blogspot.com/2011/09/recommendation-engine.html | 12:59 PM |
http://horicky.blogspot.com/2011/09/recommendation-engine.html#links | Links to this post |
http://horicky.blogspot.com/2011/08/scale-independently-in-cloud.html | 9:37 PM |
http://horicky.blogspot.com/2011/08/scale-independently-in-cloud.html#links | Links to this post |
http://horicky.blogspot.com/2011/07/fraud-detection-methods.html | 4:35 PM |
http://horicky.blogspot.com/2011/07/fraud-detection-methods.html#links | Links to this post |
http://horicky.blogspot.com/2011/04/k-means-clustering-in-map-reduce.html | 10:29 PM |
http://horicky.blogspot.com/2011/04/k-means-clustering-in-map-reduce.html#links | Links to this post |
http://horicky.blogspot.com/2011/03/compare-machine-learning-models-with.html | 6:47 PM |
http://horicky.blogspot.com/2011/03/compare-machine-learning-models-with.html#links | Links to this post |
http://horicky.blogspot.com/2009/11/machine-learning-with-linear-model.html | Linear and Logistic regression |
http://horicky.blogspot.com/2009/11/machine-learning-with-linear-model.html | Neural Network |
http://horicky.blogspot.com/2009/11/support-vector-machine.html | Support Vector Machine |
http://horicky.blogspot.com/2008/02/classification-via-decision-tree.html | Decision tree |
http://horicky.blogspot.com/search/label/data%20mining | data mining |
http://horicky.blogspot.com/search/label/machine%20learning | machine learning |
http://horicky.blogspot.com/search/label/predictive%20analytics | predictive analytics |
http://horicky.blogspot.com/2011/03/predictive-analytics-conference-2011.html | 10:59 PM |
http://horicky.blogspot.com/2011/03/predictive-analytics-conference-2011.html#links | Links to this post |
http://horicky.blogspot.com/2009/05/machine-learning-probabilistic-model.html | bayesian networks |
http://horicky.blogspot.com/2009/11/machine-learning-with-linear-model.html | linear regression |
http://horicky.blogspot.com/2009/11/machine-learning-with-linear-model.html | neural networks |
http://horicky.blogspot.com/2008/02/classification-via-decision-tree.html | decision trees |
http://horicky.blogspot.com/2009/11/support-vector-machine.html | support vector machines |
http://horicky.blogspot.com/2009/05/machine-learning-nearest-neighbor.html | nearest neighbors |
http://horicky.blogspot.com/2009/10/machine-learning-association-rule.html | association rules |
http://horicky.blogspot.com/2009/11/principal-component-analysis.html | principal component analysis |
http://horicky.blogspot.com/2008/11/hadoop-mapreduce-implementation.html | Hadoop, Map/Reduce |
http://horicky.blogspot.com/2010/08/designing-algorithmis-for-map-reduce.html | sequential algorithm can be restructured to run in map reduce |
http://horicky.blogspot.com/search/label/Business%20Intelligence | Business Intelligence |
http://horicky.blogspot.com/search/label/data%20mining | data mining |
http://horicky.blogspot.com/search/label/scalability | scalability |
http://horicky.blogspot.com/2010/12/bi-at-large-scale.html | 8:36 AM |
http://horicky.blogspot.com/2010/12/bi-at-large-scale.html#links | Links to this post |
http://horicky.blogspot.com/2011/09/recommendation-engine.html | Recommendation Engine |
http://horicky.blogspot.com/2011/08/scale-independently-in-cloud.html | Scale Independently in the Cloud |
http://horicky.blogspot.com/2011/07/fraud-detection-methods.html | Fraud Detection Methods |
http://horicky.blogspot.com/2011/04/k-means-clustering-in-map-reduce.html | K-Means Clustering in Map Reduce |
http://horicky.blogspot.com/2011/03/compare-machine-learning-models-with.html | Compare Machine Learning models with ROC Curve |
http://horicky.blogspot.com/2011/03/predictive-analytics-conference-2011.html | Predictive Analytics Conference 2011 |
http://horicky.blogspot.com/2010/12/bi-at-large-scale.html | BI at large scale |
http://horicky.blogspot.com/2010/11/map-reduce-and-stream-processing.html | Map Reduce and Stream Processing |
http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html | Scalable System Design Patterns |
http://horicky.blogspot.com/2010/10/bigtable-model-with-cassandra-and-hbase.html | BigTable Model with Cassandra and HBase |
http://horicky.blogspot.com/2007_10_01_archive.html | October 2007 |
http://horicky.blogspot.com/2007_11_01_archive.html | November 2007 |
http://horicky.blogspot.com/2007_12_01_archive.html | December 2007 |
http://horicky.blogspot.com/2008_01_01_archive.html | January 2008 |
http://horicky.blogspot.com/2008_02_01_archive.html | February 2008 |
http://horicky.blogspot.com/2008_03_01_archive.html | March 2008 |
http://horicky.blogspot.com/2008_04_01_archive.html | April 2008 |
http://horicky.blogspot.com/2008_05_01_archive.html | May 2008 |
http://horicky.blogspot.com/2008_06_01_archive.html | June 2008 |
http://horicky.blogspot.com/2008_07_01_archive.html | July 2008 |
http://horicky.blogspot.com/2008_08_01_archive.html | August 2008 |
http://horicky.blogspot.com/2008_10_01_archive.html | October 2008 |
http://horicky.blogspot.com/2008_11_01_archive.html | November 2008 |
http://horicky.blogspot.com/2008_12_01_archive.html | December 2008 |
http://horicky.blogspot.com/2009_01_01_archive.html | January 2009 |
http://horicky.blogspot.com/2009_04_01_archive.html | April 2009 |
http://horicky.blogspot.com/2009_05_01_archive.html | May 2009 |
http://horicky.blogspot.com/2009_07_01_archive.html | July 2009 |
http://horicky.blogspot.com/2009_08_01_archive.html | August 2009 |
http://horicky.blogspot.com/2009_09_01_archive.html | September 2009 |
http://horicky.blogspot.com/2009_10_01_archive.html | October 2009 |
http://horicky.blogspot.com/2009_11_01_archive.html | November 2009 |
http://horicky.blogspot.com/2009_12_01_archive.html | December 2009 |
http://horicky.blogspot.com/2010_01_01_archive.html | January 2010 |
http://horicky.blogspot.com/2010_02_01_archive.html | February 2010 |
http://horicky.blogspot.com/2010_03_01_archive.html | March 2010 |
http://horicky.blogspot.com/2010_05_01_archive.html | May 2010 |
http://horicky.blogspot.com/2010_06_01_archive.html | June 2010 |
http://horicky.blogspot.com/2010_07_01_archive.html | July 2010 |
http://horicky.blogspot.com/2010_08_01_archive.html | August 2010 |
http://horicky.blogspot.com/2010_10_01_archive.html | October 2010 |
http://horicky.blogspot.com/2010_11_01_archive.html | November 2010 |
http://horicky.blogspot.com/2010_12_01_archive.html | December 2010 |
http://horicky.blogspot.com/2011_03_01_archive.html | March 2011 |
http://horicky.blogspot.com/2011_04_01_archive.html | April 2011 |
http://horicky.blogspot.com/2011_07_01_archive.html | July 2011 |
http://horicky.blogspot.com/2011_08_01_archive.html | August 2011 |
http://horicky.blogspot.com/2011_09_01_archive.html | September 2011 |
http://horicky.blogspot.com/feeds/posts/default | Atom |
external links
address | anchor text |
---|---|
http://1.bp.blogspot.com/-5jJbvLcccrM/Tl_mi56QEtI/AAAAAAAAAkU/sHFcM2rT1Qk/s1600/P1.png | |
http://2.bp.blogspot.com/-g4Kek53agA8/Tl_s7r5mmMI/AAAAAAAAAkk/q43IJZ9mIsQ/s1600/P1.png | |
http://4.bp.blogspot.com/-06YIrYjJ1m4/TmBnFznqoxI/AAAAAAAAAks/HD2vhDWetdg/s1600/P1.png | |
http://2.bp.blogspot.com/-NvaT7CYf9dk/TmHEHmZwuYI/AAAAAAAAAlc/y9CpjWVepIQ/s1600/P1.png | |
http://1.bp.blogspot.com/-6lGM-gmXbRo/TmG4MBZNoDI/AAAAAAAAAlM/pekmpz2n2RI/s1600/P1.png | |
http://3.bp.blogspot.com/-EwZkU5S95HU/TmHC_xlsGhI/AAAAAAAAAlU/f7rt6A0iLl0/s1600/P1.png | |
http://2.bp.blogspot.com/-YEEM5PYuTAI/TmHGPmDYICI/AAAAAAAAAls/8VIm0-7PyYM/s1600/P1.png | |
http://4.bp.blogspot.com/-y9btoLjP_iM/TmJQUkWh6cI/AAAAAAAAAl0/gC-yWmMYTmU/s1600/P1.png | |
http://4.bp.blogspot.com/-aewfHUe898c/TmJQ7T3SwTI/AAAAAAAAAl8/EyXh1KEKMTg/s1600/P1.png | |
http://www.stanford.edu/group/mmds/slides2010/Martinsson.pdf | random projection mechanism as describer here |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=5274453680574951921 | 2 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=5274453680574951921&from=pencil | |
http://2.bp.blogspot.com/-eMBozepwj94/TlsapTefHLI/AAAAAAAAAkE/jZVl_c5RFro/s1600/P1.png | |
http://4.bp.blogspot.com/-VhH_yZzf4Hs/TlsfEEBG9DI/AAAAAAAAAkM/Rpkj1ElghnQ/s1600/P1.png | |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=5328332568332849178 | 1 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=5328332568332849178&from=pencil | |
http://4.bp.blogspot.com/-Qv9-KM_kVws/Thjq1yuCtmI/AAAAAAAAAj8/AU0w-Kg-jcA/s1600/P1.png | |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=2924746439738554798 | 0 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=2924746439738554798&from=pencil | |
http://3.bp.blogspot.com/-cAYr3kBEFlQ/TbEopL57W_I/AAAAAAAAAjY/MYYMGBJ6UYI/s1600/P1.png | |
http://2.bp.blogspot.com/-CPemjTn74Lw/TbGlvWyRe3I/AAAAAAAAAjg/KEqmSN6v594/s1600/P1.png | |
http://2.bp.blogspot.com/-S9Ty4WNwk5I/TbIBNL-d2_I/AAAAAAAAAjo/MOPga4NmfDA/s1600/P1.png | |
http://2.bp.blogspot.com/-2cTcbzx3cZQ/TbIBcAsF7WI/AAAAAAAAAjw/Qs5SPSWMXds/s1600/P2.png | |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=2400415893429084182 | 3 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=2400415893429084182&from=pencil | |
http://4.bp.blogspot.com/-6y4jktKu-YM/TYVhQ1ShJ6I/AAAAAAAAAiw/hcxHH9y8SPk/s1600/p1.png | |
http://2.bp.blogspot.com/-NYD2KAx6pg0/TYVu2mKU66I/AAAAAAAAAi4/gK0IpQYznZ0/s1600/P2.png | |
http://2.bp.blogspot.com/-Nz9gIYM512A/TYVzLE8adMI/AAAAAAAAAjI/usbJWYLeIuw/s1600/P3.png | |
http://4.bp.blogspot.com/-1PvjF2Pq4mQ/TYV2ouqSwJI/AAAAAAAAAjQ/KHmQqQdAyMA/s1600/P4.png | |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=3187787432347206329 | 2 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=3187787432347206329&from=pencil | |
http://research.google.com/pubs/archive/36296.pdf | a good paper on their PLANET project |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=4965787605309153381 | 0 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=4965787605309153381&from=pencil | |
http://4.bp.blogspot.com/_j6mB7TMmJJY/TPvSTN2OV7I/AAAAAAAAAiQ/EG6_m6lWkTk/s1600/p1.png | |
http://www.cs.stanford.edu/people/ang//papers/nips06-mapreducemulticore.pdf | a big portion of machine learning algorithm |
http://mahout.apache.org/ | Apache Mahout project |
https://cwiki.apache.org/confluence/display/MAHOUT/Algorithms | implemented an impressive list of algorithms |
http://3.bp.blogspot.com/_j6mB7TMmJJY/TPybnJqgcNI/AAAAAAAAAig/RnZsobzCvag/s1600/P2.png | |
http://3.bp.blogspot.com/_j6mB7TMmJJY/TPya7eHvCDI/AAAAAAAAAiY/0a4cdlX-Acg/s1600/P2.png | |
http://www.blogger.com/comment.g?blogID=7994087232040033267&postID=1394175011370671044 | 4 Comments |
http://www.blogger.com/post-edit.g?blogID=7994087232040033267&postID=1394175011370671044&from=pencil | |
http://www.blogger.com/profile/03793674536997651667 | My Photo |
http://www.blogger.com/profile/03793674536997651667 | Ricky Ho |
http://www.blogger.com/profile/03793674536997651667 | View my complete profile |
http://news.google.com/ | Google News |
http://help.blogger.com/bin/answer.py?answer=41427 | Edit-Me |
http://help.blogger.com/bin/answer.py?answer=41427 | Edit-Me |
http://www.blogger.com | Powered by Blogger |