Mining of Massive Datasets
At the highest level of description, this book is about data mining. However, it focuses on data mining of very large amounts of data, that is, data so large it does not fit in main memory. Because of the emphasis on size, many of our examples are about the Web or data derived from the Web. Further, the book takes an algorithmic point of view: data mining is about applying algorithms to data, rather than using data to ‘train’ a machine-learning engine of some sort. The principal topics covered are:
- Distributed file systems and map-reduce as a tool for creating parallel algorithms that succeed on very large amounts of data.
- Similarity search, including the key techniques of minhashing and locality-sensitive hashing.
- Data-stream processing and specialized algorithms for dealing with data that arrives so fast it must be processed immediately or lost.
- The technology of search engines, including Google’s PageRank, link-spam detection, and the hubs-and-authorities approach.
- Frequent-itemset mining, including association rules, market-baskets, the A-Priori Algorithm, and its improvements.
- Algorithms for clustering very large, high-dimensional datasets.
- Two key problems for Web applications: managing advertising and recommendation systems.
- Algorithms for analyzing and mining the structure of very large graphs, especially social-network graphs.
- Techniques for obtaining the important properties of a large dataset by dimensionality reduction, including singular-value decomposition and latent semantic indexing.
- Machine-learning algorithms that can be applied to very large data, such as perceptrons, support-vector machines, and gradient descent.
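To give a flavor of the map-reduce style mentioned above, here is a minimal single-machine sketch of the classic word-count computation. The three phases mirror what a distributed framework does across many machines; the function names and toy documents are illustrative, not from any particular framework.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cat sat", "the dog sat"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

In a real deployment the map and reduce functions are the only parts the programmer writes; the shuffle, partitioning across machines, and failure recovery are handled by the framework.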
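The minhashing technique listed under similarity search can likewise be sketched in a few lines: the fraction of positions where two sets' minhash signatures agree is an unbiased estimate of their Jaccard similarity. This toy version, with illustrative parameter choices (random linear hash functions modulo a prime), is only a sketch of the idea.

```python
import random

PRIME = 2_147_483_647  # large prime modulus for the hash functions

def make_hash_funcs(k, seed=42):
    # k random linear hash functions h(x) = (a*x + b) mod PRIME.
    rng = random.Random(seed)
    return [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(k)]

def minhash_signature(items, hash_funcs):
    # For each hash function, keep the minimum hash value over the set.
    return [min((a * x + b) % PRIME for x in items) for a, b in hash_funcs]

def estimate_jaccard(sig1, sig2):
    # Fraction of agreeing signature positions estimates Jaccard similarity.
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

A, B = set(range(1, 11)), set(range(5, 15))   # true Jaccard = 6/14 ≈ 0.43
funcs = make_hash_funcs(200)
est = estimate_jaccard(minhash_signature(A, funcs),
                       minhash_signature(B, funcs))
```

The point of the signature is compression: two sets of any size are reduced to fixed-length vectors that can be compared position by position, which is what makes locality-sensitive hashing over many sets feasible.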
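Finally, the PageRank computation named among the search-engine topics is, at its core, an iterated matrix-vector calculation. A minimal sketch, assuming a tiny hand-built link graph and the usual "taxation" trick for dead ends (the graph and parameter values here are illustrative):

```python
def pagerank(links, beta=0.85, iters=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from the uniform vector
    for _ in range(iters):
        # Each page keeps (1 - beta)/n from the random-teleport "tax".
        new = {p: (1 - beta) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = beta * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dead end: redistribute its rank uniformly over all pages.
                for q in pages:
                    new[q] += beta * rank[p] / n
        rank = new
    return rank

rank = pagerank({'a': ['b', 'c'], 'b': ['c'], 'c': ['a']})
```

On this three-page graph, page c ends up with the highest rank, since it is pointed to by both a and b; the ranks always sum to 1, reflecting that PageRank is a probability distribution over pages.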