
  • Computer Science Theory for the Information Age (English Edition) / SJTU Zhiyuan Textbook Series
    Category: Books
    [List Price] NT$320-464
    [Sale Price] NT$200-290
    [Authors] John Hopcroft, Ravindran Kannan
    [Publisher] Shanghai Jiao Tong University Press
    [ISBN] 9787313096098
    [Discount Terms] Free shipping plus free gifts on a single order of NT$999 or more
    5% off, free shipping, and free gifts on a single order of NT$2,000 or more
    8% off, free shipping, and free gifts on a single order of NT$3,000 or more
    12% off, free shipping, and free gifts on a single order of NT$4,000 or more
    [Free Gifts This Round] ① A well-made non-woven reusable bag ② A brand-name pen ③ A pack of brand-name tissues
    Available versions: genuine new copy, or electronic PDF file
    Note: if more than one option is offered, please choose one before clicking "Add to Cart".
    *. E-books are priced at 69% of the list price; for example, if the 了得網 price is NT$100, the PDF e-book costs NT$69.
    *. Cash on delivery is not available for e-books; please pay by ATM transfer, at a convenience store, or via PayPal. The file will be emailed to you within 1-24 hours after payment.
    *. If you are not satisfied with the e-book you receive, you may contact us for a refund. Thank you.
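    For illustration only, the pricing rules quoted above can be summed up in a short sketch. This is a minimal example assuming the figures in this listing (PDF e-books at 69% of the list price; 5% off at NT$2,000, 8% off at NT$3,000, 12% off at NT$4,000); the function names are hypothetical and not part of any store system.

        # Minimal sketch of the pricing rules quoted above (hypothetical helper names).
        def ebook_price(list_price_ntd):
            # PDF e-books cost 0.69 x the list price, e.g. NT$100 -> NT$69.
            return round(list_price_ntd * 0.69)

        def order_total(subtotal_ntd):
            # Tiered discount: pay 88% at NT$4,000 or more, 92% at NT$3,000 or more,
            # 95% at NT$2,000 or more, full price otherwise.
            if subtotal_ntd >= 4000:
                rate = 0.88
            elif subtotal_ntd >= 3000:
                rate = 0.92
            elif subtotal_ntd >= 2000:
                rate = 0.95
            else:
                rate = 1.00
            return round(subtotal_ntd * rate)

        print(ebook_price(100))   # 69
        print(order_total(2500))  # 2375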
    Description



    Publisher: Shanghai Jiao Tong University Press
    ISBN: 9787313096098
    Product code: 1029314602

    Format: 16mo
    Publication date: 2013-05-01

    Code: 35
    Authors: John Hopcroft, Ravindran Kannan

    Basic Information

    • Title: Computer Science Theory for the Information Age (English Edition) / SJTU Zhiyuan Textbook Series
    • Authors: (US) John Hopcroft / Ravindran Kannan
    • Code: 35
    • Publisher: Shanghai Jiao Tong University Press
    • ISBN: 9787313096098

    Other Reference Information

    • Publication date: 2013-05-01
    • Print date: 2013-05-01
    • Edition: 1
    • Printing: 1
    • Format: 16mo
    • Binding: Paperback
    • Pages: 386
    • Word count: 620,000

    Editorial Recommendation

    Computer Science Theory for the Information Age (English Edition) is part of the Shanghai Jiao Tong University Zhiyuan textbook series, written by the internationally renowned computer scientists Professor John Hopcroft and Professor Ravindran Kannan. The book covers high-dimensional space, random graphs, singular value decomposition, random walks and Markov chains, learning algorithms and VC-dimension, algorithms for massive data problems, clustering, and graphical models and belief propagation, followed by an appendix and an index. From Chapter 2 onward, each chapter ends with a set of exercises. It is suitable as a textbook for upper-level undergraduate or graduate students in computer science and related fields, and also serves as a reference for practitioners.

    Summary

    Computer Science Theory for the Information Age (English Edition) is part of the SJTU Zhiyuan textbook series, written by John Hopcroft.
         Synopsis: Computer Science Theory for the Information Age covers the computer science theory likely to be useful in the next 40 years, including high-dimensional space, random graphs, singular value decomposition, random walks, Markov chains, learning algorithms, VC-dimension, algorithms for massive data problems, and clustering. The book also covers graphical models and belief propagation, ranking and voting, sparse vectors, and compressed sensing. The book is intended for either an undergraduate or a graduate theory course in computer science. Prof. John Hopcroft is a world-renowned scientist and an expert on education in computer science. He was awarded the A. M. Turing Award in 1986 for his contributions to theoretical computing and data structure design. Dr. Ravindran Kannan is a principal researcher at Microsoft Research Labs in India.

    Table of Contents

    1 Introduction
    2 High-Dimensional Space
    2.1 Properties of High-Dimensional Space
    2.2 The High-Dimensional Sphere
    2.2.1 The Sphere and the Cube in Higher Dimensions
    2.2.2 Volume and Surface Area of the Unit Sphere
    2.2.3 The Volume is Near the Equator
    2.2.4 The Volume is in a Narrow Annulus
    2.2.5 The Surface Area is Near the Equator
    2.3 Volumes of Other Solids
    2.4 Generating Points Uniformly at Random on the Surface of a Sphere
    2.5 Gaussians in High Dimension
    2.6 Bounds on Tail Probability
    2.7 Random Projection and the Johnson-Lindenstrauss Theorem
    2.8 Bibliographic Notes
    2.9 Exercises
    3 Random Graphs
    3.1 The G(n, p) Model
    3.1.1 Degree Distribution
    3.1.2 Existence of Triangles in G(n, d/n)
    3.2 Phase Transitions
    3.3 The Giant Component
    3.4 Branching Processes
    3.5 Cycles and Full Connectivity
    3.5.1 Emergence of Cycles
    3.5.2 Full Connectivity
    3.5.3 Threshold for O(ln n) Diameter
    3.6 Phase Transitions for Monotone Properties
    3.7 Phase Transitions for CNF-sat
    3.8 Nonuniform and Growth Models of Random Graphs
    3.8.1 Nonuniform Models
    3.8.2 Giant Component in Random Graphs with Given Degree Distribution
    3.9 Growth Models
    3.9.1 Growth Model Without Preferential Attachment
    3.9.2 A Growth Model with Preferential Attachment
    3.10 Small World Graphs
    3.11 Bibliographic Notes
    3.12 Exercises
    4 Singular Value Decomposition (SVD)
    4.1 Singular Vectors
    4.2 Singular Value Decomposition (SVD)
    4.3 Best Rank k Approximations
    4.4 Power Method for Computing the Singular Value Decomposition
    4.5 Applications of Singular Value Decomposition
    4.5.1 Principal Component Analysis
    4.5.2 Clustering a Mixture of Spherical Gaussians
    4.5.3 An Application of SVD to a Discrete Optimization Problem
    4.5.4 Spectral Decomposition
    4.5.5 Singular Vectors and Ranking Documents
    4.6 Bibliographic Notes
    4.7 Exercises
    5 Random Walks and Markov Chains
    5.1 Stationary Distribution
    5.2 Electrical Networks and Random Walks
    5.3 Random Walks on Undirected Graphs with Unit Edge Weights
    5.4 Random Walks in Euclidean Space
    5.5 The Web as a Markov Chain
    5.6 Markov Chain Monte Carlo
    5.6.1 Metropolis-Hastings Algorithm
    5.6.2 Gibbs Sampling
    5.7 Convergence of Random Walks on Undirected Graphs
    5.7.1 Using Normalized Conductance to Prove Convergence
    5.8 Bibliographic Notes
    5.9 Exercises
    6 Learning and VC-Dimension
    6.1 Learning
    6.2 Linear Separators, the Perceptron Algorithm, and Margins
    6.3 Nonlinear Separators, Support Vector Machines, and Kernels
    6.4 Strong and Weak Learning-Boosting
    6.5 Number of Examples Needed for Prediction: VC-Dimension
    6.6 Vapnik-Chervonenkis or VC-Dimension
    6.6.1 Examples of Set Systems and Their VC-Dimension
    6.6.2 The Shatter Function
    6.6.3 Shatter Function for Set Systems of Bounded VC-Dimension
    6.6.4 Intersection Systems
    6.7 The VC Theorem
    6.8 Bibliographic Notes
    6.9 Exercises
    7 Algorithms for Massive Data Problems
    7.1 Frequency Moments of Data Streams
    7.1.1 Number of Distinct Elements in a Data Stream
    7.1.2 Counting the Number of Occurrences of a Given Element
    7.1.3 Counting Frequent Elements
    7.1.4 The Second Moment
    7.2 Sketch of a Large Matrix
    7.2.1 Matrix Multiplication Using Sampling
    7.2.2 Approximating a Matrix with a Sample of Rows and Columns
    7.3 Sketches of Documents
    7.4 Exercises
    8 Clustering
    8.1 Some Clustering Examples
    8.2 A Simple Greedy Algorithm for k-clustering
    8.3 Lloyd's Algorithm for k-means Clustering
    8.4 Meaningful Clustering via Singular Value Decomposition
    8.5 Recursive Clustering Based on Sparse Cuts
    8.6 Kernel Methods
    8.7 Agglomerative Clustering
    8.8 Communities, Dense Submatrices
    8.9 Flow Methods
    8.10 Linear Programming Formulation
    8.11 Finding a Local Cluster Without Examining the Whole Graph
    8.12 Axioms for Clustering
    8.12.1 An Impossibility Result
    8.12.2 A Satisfiable Set of Axioms
    8.13 Exercises
    9 Graphical Models and Belief Propagation
    9.1 Bayesian or Belief Networks
    9.2 Markov Random Fields
    9.3 Factor Graphs
    9.4 Tree Algorithms
    9.5 Message Passing Algorithm
    9.6 Graphs with a Single Cycle
    9.7 Belief Update in Networks with a Single Loop
    9.8 Maximum Weight Matching
    9.9 Warning Propagation
    9.10 Correlation Between Variables
    9.11 Exercises
    10 Other Topics
    10.1 Rankings
    10.2 Hare System for Voting
    10.3 Compressed Sensing and Sparse Vectors
    10.3.1 Unique Reconstruction of a Sparse Vector
    10.3.2 The Exact Reconstruction Property
    10.3.3 Restricted Isometry Property
    10.4 Applications
    10.4.1 Sparse Vector in Some Coordinate Basis
    10.4.2 A Representation Cannot be Sparse in Both Time and Frequency Domains
    10.4.3 Biological
    10.4.4 Finding Overlapping Cliques or Communities
    10.4.5 Low Rank Matrices
    10.5 Exercises
    11 Appendix
    11.1 Asymptotic Notation
    11.2 Useful Inequalities
    11.3 Sums of Series
    11.4 Probability
    11.4.1 Sample Space, Events, Independence
    11.4.2 Variance
    11.4.3 Variance of Sum of Independent Random Variables
    11.4.4 Covariance
    11.4.5 The Central Limit Theorem
    11.4.6 Median
    11.4.7 Unbiased Estimators
    11.4.8 Probability Distributions
    11.4.9 Maximum Likelihood Estimation (MLE)
    11.4.10 Tail Bounds
    11.4.11 Chernoff Bounds: Bounding of Large Deviations
    11.4.12 Hoeffding's Inequality
    11.5 Generating Functions
    11.5.1 Generating Functions for Sequences Defined by Recurrence Relationships
    11.5.2 Exponential Generating Function
    11.6 Eigenvalues and Eigenvectors
    11.6.1 Eigenvalues and Eigenvectors
    11.6.2 Symmetric Matrices
    11.6.3 Extremal Properties of Eigenvalues
    11.6.4 Eigenvalues of the Sum of Two Symmetric Matrices
    11.6.5 Norms
    11.6.6 Important Norms and Their Properties
    11.6.7 Linear Algebra
    11.6.8 Distance Between Subspaces
    11.7 Miscellaneous
    11.7.1 Variational Methods
    11.7.2 Hash Functions
    11.7.3 Catalan Numbers
    11.7.4 Sperner's Lemma
    11.8 Exercises
    Index
    References




    "
     
    網友評論  我們期待著您對此商品發表評論
     
    相關商品
    在線留言 商品價格為新臺幣
    關於我們 送貨時間 安全付款 會員登入 加入會員 我的帳戶 網站聯盟
    DVD 連續劇 Copyright © 2024, Digital 了得網 Co., Ltd.
    返回頂部