Advanced Computer Network (Record no. 44786)
000 - LEADER | |
---|---|
fixed length control field | 09872nam a22001577a 4500 |
003 - CONTROL NUMBER IDENTIFIER | |
control field | OSt |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 9789350040133 |
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER | |
Classification number | 005.8 ADV-A |
100 ## - MAIN ENTRY--PERSONAL NAME | |
Personal name | Ambawade, D |
245 ## - TITLE STATEMENT | |
Title | Advanced Computer Network |
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) | |
Place of publication, distribution, etc. | New Delhi |
Name of publisher, distributor, etc. | Dreamtech Press |
Date of publication, distribution, etc. | 2013 |
300 ## - PHYSICAL DESCRIPTION | |
Extent | 376 p. |
500 ## - GENERAL NOTE | |
General note | PART 1 CLASSIFICATION 1
1 Machine learning basics 3
1.1 What is machine learning? 5
    Sensors and the data deluge 6 ■ Machine learning will be more important in the future 7
1.2 Key terminology 7
1.3 Key tasks of machine learning 10
1.4 How to choose the right algorithm 11
1.5 Steps in developing a machine learning application 11
1.6 Why Python? 13
    Executable pseudo-code 13 ■ Python is popular 13 ■ What Python has that other languages don’t have 14 ■ Drawbacks 14
1.7 Getting started with the NumPy library 15
1.8 Summary 17
2 Classifying with k-Nearest Neighbors 18
2.1 Classifying with distance measurements 19
    Prepare: importing data with Python 21 ■ Putting the kNN classification algorithm into action 23 ■ How to test a classifier 24
2.2 Example: improving matches from a dating site with kNN 24
    Prepare: parsing data from a text file 25 ■ Analyze: creating scatter plots with Matplotlib 27 ■ Prepare: normalizing numeric values 29 ■ Test: testing the classifier as a whole program 31 ■ Use: putting together a useful system 32
2.3 Example: a handwriting recognition system 33
    Prepare: converting images into test vectors 33 ■ Test: kNN on handwritten digits 35
2.4 Summary 36
3 Splitting datasets one feature at a time: decision trees 37
3.1 Tree construction 39
    Information gain 40 ■ Splitting the dataset 43 ■ Recursively building the tree 46
3.2 Plotting trees in Python with Matplotlib annotations 48
    Matplotlib annotations 49 ■ Constructing a tree of annotations 51
3.3 Testing and storing the classifier 56
    Test: using the tree for classification 56 ■ Use: persisting the decision tree 57
3.4 Example: using decision trees to predict contact lens type 57
3.5 Summary 59
4 Classifying with probability theory: naïve Bayes 61
4.1 Classifying with Bayesian decision theory 62
4.2 Conditional probability 63
4.3 Classifying with conditional probabilities 65
4.4 Document classification with naïve Bayes 65
4.5 Classifying text with Python 67
    Prepare: making word vectors from text 67 ■ Train: calculating probabilities from word vectors 69 ■ Test: modifying the classifier for real-world conditions 71 ■ Prepare: the bag-of-words document model 73
4.6 Example: classifying spam email with naïve Bayes 74
    Prepare: tokenizing text 74 ■ Test: cross validation with naïve Bayes 75
4.7 Example: using naïve Bayes to reveal local attitudes from personal ads 77
    Collect: importing RSS feeds 78 ■ Analyze: displaying locally used words 80
4.8 Summary 82
5 Logistic regression 83
5.1 Classification with logistic regression and the sigmoid function: a tractable step function 84
5.2 Using optimization to find the best regression coefficients 86
    Gradient ascent 86 ■ Train: using gradient ascent to find the best parameters 88 ■ Analyze: plotting the decision boundary 90 ■ Train: stochastic gradient ascent 91
5.3 Example: estimating horse fatalities from colic 96
    Prepare: dealing with missing values in the data 97 ■ Test: classifying with logistic regression 98
5.4 Summary 100
6 Support vector machines 101
6.1 Separating data with the maximum margin 102
6.2 Finding the maximum margin 104
    Framing the optimization problem in terms of our classifier 104 ■ Approaching SVMs with our general framework 106
6.3 Efficient optimization with the SMO algorithm 106
    Platt’s SMO algorithm 106 ■ Solving small datasets with the simplified SMO 107
6.4 Speeding up optimization with the full Platt SMO 112
6.5 Using kernels for more complex data 118
    Mapping data to higher dimensions with kernels 118 ■ The radial bias function as a kernel 119 ■ Using a kernel for testing 122
6.6 Example: revisiting handwriting classification 125
6.7 Summary 127
7 Improving classification with the AdaBoost meta-algorithm 129
7.1 Classifiers using multiple samples of the dataset 130
    Building classifiers from randomly resampled data: bagging 130 ■ Boosting 131
7.2 Train: improving the classifier by focusing on errors 131
7.3 Creating a weak learner with a decision stump 133
7.4 Implementing the full AdaBoost algorithm 136
7.5 Test: classifying with AdaBoost 139
7.6 Example: AdaBoost on a difficult dataset 140
7.7 Classification imbalance 142
    Alternative performance metrics: precision, recall, and ROC 143 ■ Manipulating the classifier’s decision with a cost function 147 ■ Data sampling for dealing with classification imbalance 148
7.8 Summary 148
PART 2 FORECASTING NUMERIC VALUES WITH REGRESSION 151
8 Predicting numeric values: regression 153
8.1 Finding best-fit lines with linear regression 154
8.2 Locally weighted linear regression 160
8.3 Example: predicting the age of an abalone 163
8.4 Shrinking coefficients to understand our data 164
    Ridge regression 164 ■ The lasso 167 ■ Forward stagewise regression 167
8.5 The bias/variance tradeoff 170
8.6 Example: forecasting the price of LEGO sets 172
    Collect: using the Google shopping API 173 ■ Train: building a model 174
8.7 Summary 177
9 Tree-based regression 179
9.1 Locally modeling complex data 180
9.2 Building trees with continuous and discrete features 181
9.3 Using CART for regression 184
    Building the tree 184 ■ Executing the code 186
9.4 Tree pruning 188
    Prepruning 188 ■ Postpruning 190
9.5 Model trees 192
9.6 Example: comparing tree methods to standard regression 195
9.7 Using Tkinter to create a GUI in Python 198
    Building a GUI in Tkinter 199 ■ Interfacing Matplotlib and Tkinter 201
9.8 Summary 203
PART 3 UNSUPERVISED LEARNING 205
10 Grouping unlabeled items using k-means clustering 207
10.1 The k-means clustering algorithm 208
10.2 Improving cluster performance with postprocessing 213
10.3 Bisecting k-means 214
10.4 Example: clustering points on a map 217
    The Yahoo! PlaceFinder API 218 ■ Clustering geographic coordinates 220
10.5 Summary 223
11 Association analysis with the Apriori algorithm 224
11.1 Association analysis 225
11.2 The Apriori principle 226
11.3 Finding frequent itemsets with the Apriori algorithm 228
    Generating candidate itemsets 229 ■ Putting together the full Apriori algorithm 231
11.4 Mining association rules from frequent item sets 233
11.5 Example: uncovering patterns in congressional voting 237
    Collect: build a transaction data set of congressional voting records 238 ■ Test: association rules from congressional voting records 243
11.6 Example: finding similar features in poisonous mushrooms 245
11.7 Summary 246
12 Efficiently finding frequent itemsets with FP-growth 248
12.1 FP-trees: an efficient way to encode a dataset 249
12.2 Build an FP-tree 251
    Creating the FP-tree data structure 251 ■ Constructing the FP-tree 252
12.3 Mining frequent items from an FP-tree 256
    Extracting conditional pattern bases 257 ■ Creating conditional FP-trees 258
12.4 Example: finding co-occurring words in a Twitter feed 260
12.5 Example: mining a clickstream from a news site 264
12.6 Summary 265
PART 4 ADDITIONAL TOOLS 267
13 Using principal component analysis to simplify data 269
13.1 Dimensionality reduction techniques 270
13.2 Principal component analysis 271
    Moving the coordinate axes 271 ■ Performing PCA in NumPy 273
13.3 Example: using PCA to reduce the dimensionality of semiconductor manufacturing data 275
13.4 Summary 278
14 Simplifying data with the singular value decomposition 280
14.1 Applications of the SVD 281
    Latent semantic indexing 281 ■ Recommendation systems 282
14.2 Matrix factorization 283
14.3 SVD in Python 284
14.4 Collaborative filtering–based recommendation engines 286
    Measuring similarity 287 ■ Item-based or user-based similarity? 289 ■ Evaluating recommendation engines 289
14.5 Example: a restaurant dish recommendation engine 290
    Recommending untasted dishes 290 ■ Improving recommendations with the SVD 292 ■ Challenges with building recommendation engines 295
14.6 Example: image compression with the SVD 295
14.7 Summary 298
15 Big data and MapReduce 299
15.1 MapReduce: a framework for distributed computing 300
15.2 Hadoop Streaming 302
    Distributed mean and variance mapper 303 ■ Distributed mean and variance reducer 304
15.3 Running Hadoop jobs on Amazon Web Services 305
    Services available on AWS 305 ■ Getting started with Amazon Web Services 306 ■ Running a Hadoop job on EMR 307
15.4 Machine learning in MapReduce 312
15.5 Using mrjob to automate MapReduce in Python 313
    Using mrjob for seamless integration with EMR 313 ■ The anatomy of a MapReduce script in mrjob 314
15.6 Example: the Pegasos algorithm for distributed SVMs 316
    The Pegasos algorithm 317 ■ Training: MapReduce support vector machines with mrjob 318
15.7 Do you really need MapReduce? 322
15.8 Summary 323
appendix A Getting started with Python 325
appendix B Linear algebra 335
appendix C Probability refresher 341
appendix D Resources 345
index 347 |
901 ## - LOCAL DATA ELEMENT A, LDA (RLIN) | |
Acc. No. | 29314 |
942 ## - ADDED ENTRY ELEMENTS (KOHA) | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Koha item type | Books |
Holdings | |
Withdrawn status | |
Lost status | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Damaged status | |
Not for loan | Not For Loan |
Collection code | Reference |
Home library | Amity Central Library |
Current library | Amity Central Library |
Shelving location | ASET ECE |
Date acquired | 23/10/2019 |
Source of acquisition | SBA |
Cost, normal purchase price | 429.00 |
Inventory number | SBA / 12505 26/08/2019 |
Total Checkouts | |
Full call number | 005.8 ADV-A |
Barcode | 29314 |
Date last seen | 23/10/2019 |
Uniform Resource Identifier | https://epgp.inflibnet.ac.in/Home/Download |
Price effective from | 23/10/2019 |
Koha item type | Reference Book |