
    Classical and Quantum Information Theory (Reprint Edition) [經典與量子信息論(影印版)]
    Product category: Industrial Technology -> Electronic Communication
    [List price] NT$1744-2528
    [Discounted price] NT$1090-1580
    [Author] Desurvire (France)
    [Category] Books > Industrial Technology > Electronic Communication > Wireless Communication
    [Publisher] Science Press (科學出版社)
    [ISBN] 9787030365101
    [Discount terms] Orders over NT$999: free shipping + free gifts
    Orders over NT$2000: 5% off + free shipping + free gifts
    Orders over NT$3000: 8% off + free shipping + free gifts
    Orders over NT$4000: 12% off + free shipping + free gifts
    [Gifts with this order] (1) quality non-woven eco bag, well made! (2) brand-name pen (3) brand-name paper tissues
    Editions: genuine new copy; electronic PDF file
    You have selected: genuine new copy
    Note: if several options are available, please make your selection before adding to cart.
    * E-books are priced at 69% of the list price; for example, if the 了得網 price is NT$100, the PDF e-book price is NT$69.
    * Cash on delivery is not available for e-books; please pay by ATM, convenience store, or PayPal. The file is sent by email within 1-24 hours of payment.
    * If you are not satisfied with the e-book you receive, you may contact us for a refund. Thank you.
    Product Details



    Format: 16mo (16開)
    Paper: offset paper
    Binding: paperback

    Boxed set: no
    ISBN: 9787030365101
    Author: Desurvire (France)

    Publisher: Science Press (科學出版社)
    Publication date: January 2013

        
        
    "

    Synopsis

    Classical and Quantum Information Theory (English edition) gives a complete account of classical and quantum information theory. It first introduces the basic concept of Shannon entropy together with its various applications, and then presents the core features of quantum information and quantum computation. From the viewpoints of both classical and quantum information theory, it covers coding, compression, error correction, encryption, and channel capacity, and its informal yet scientifically precise approach gives readers the background needed to understand quantum gates and circuits.
    Throughout, the book guides readers to the important results rather than losing them in the details of mathematical derivations, and it is supplemented with numerous practical examples and end-of-chapter exercises. It is suitable as a study reference for graduate students and researchers in electronics, communications, computer science, and related fields.

    Table of Contents
    Foreword
    Introduction
    1 Probability basics
    1.1 Events, event space, and probabilities
    1.2 Combinatorics
    1.3 Combined, joint, and conditional probabilities
    1.4 Exercises
    2 Probability distributions
    2.1 Mean and variance
    2.2 Exponential, Poisson, and binomial distributions
    2.3 Continuous distributions
    2.4 Uniform, exponential, and Gaussian (normal) distributions
    2.5 Central-limit theorem
    2.6 Exercises
    3 Measuring information
    3.1 Making sense of information
    3.2 Measuring information
    3.3 Information bits
    3.4 Rényi's fake coin
    3.5 Exercises
    4 Entropy
    4.1 From Boltzmann to Shannon
    4.2 Entropy in dice
    4.3 Language entropy
    4.4 Maximum entropy (discrete source)
    4.5 Exercises
    5 Mutual information and more entropies
    5.1 Joint and conditional entropies
    5.2 Mutual information
    5.3 Relative entropy
    5.4 Exercises
    6 Differential entropy
    6.1 Entropy of continuous sources
    6.2 Maximum entropy (continuous source)
    6.3 Exercises
    7 Algorithmic entropy and Kolmogorov complexity
    7.1 Defining algorithmic entropy
    7.2 The Turing machine
    7.3 Universal Turing machine
    7.4 Kolmogorov complexity
    7.5 Kolmogorov complexity vs. Shannon's entropy
    7.6 Exercises
    8 Information coding
    8.1 Coding numbers
    8.2 Coding language
    8.3 The Morse code
    8.4 Mean code length and coding efficiency
    8.5 Optimizing coding efficiency
    8.6 Shannon's source-coding theorem
    8.7 Exercises
    9 Optimal coding and compression
    9.1 Huffman codes
    9.2 Data compression
    9.3 Block codes
    9.4 Exercises
    10 Integer, arithmetic, and adaptive coding
    10.1 Integer coding
    10.2 Arithmetic coding
    10.3 Adaptive Huffman coding
    10.4 Lempel-Ziv coding
    10.5 Exercises
    11 Error correction
    11.1 Communication channel
    11.2 Linear block codes
    11.3 Cyclic codes
    11.4 Error-correction code types
    11.5 Corrected bit-error-rate
    11.6 Exercises
    12 Channel entropy
    12.1 Binary symmetric channel
    12.2 Nonbinary and asymmetric discrete channels
    12.3 Channel entropy and mutual information
    12.4 Symbol error rate
    12.5 Exercises
    13 Channel capacity and coding theorem
    13.1 Channel capacity
    13.2 Typical sequences and the typical set
    13.3 Shannon's channel coding theorem
    13.4 Exercises
    14 Gaussian channel and Shannon-Hartley theorem
    14.1 Gaussian channel
    14.2 Nonlinear channel
    14.3 Exercises
    15 Reversible computation
    15.1 Maxwell's demon and Landauer's principle
    15.2 From computer architecture to logic gates
    15.3 Reversible logic gates and computation
    15.4 Exercises
    16 Quantum bits and quantum gates
    16.1 Quantum bits
    16.2 Basic computations with 1-qubit quantum gates
    16.3 Quantum gates with multiple qubit inputs and outputs
    16.4 Quantum circuits
    16.5 Tensor products
    16.6 Noncloning theorem
    16.7 Exercises
    17 Quantum measurements
    17.1 Dirac notation
    17.2 Quantum measurements and types
    17.3 Quantum measurements on joint states
    17.4 Exercises
    18 Qubit measurements, superdense coding, and quantum teleportation
    18.1 Measuring single qubits
    18.2 Measuring n-qubits
    18.3 Bell state measurement
    18.4 Superdense coding
    18.5 Quantum teleportation
    18.6 Distributed quantum computing
    18.7 Exercises
    19 Deutsch-Jozsa, quantum Fourier transform, and Grover quantum database search algorithms
    19.1 Deutsch algorithm
    19.2 Deutsch-Jozsa algorithm
    19.3 Quantum Fourier transform algorithm
    19.4 Grover quantum database search algorithm
    19.5 Exercises
    20 Shor's factorization algorithm
    20.1 Phase estimation
    20.2 Order finding
    20.3 Continued fraction expansion
    20.4 From order finding to factorization
    20.5 Shor's factorization algorithm
    20.6 Factorizing N=15 and other nontrivial composites
    20.7 Public-key cryptography
    20.8 Exercises
    21 Quantum information theory
    21.1 Von Neumann entropy
    21.2 Relative, joint, and conditional entropy, and mutual information
    21.3 Quantum communication channel and Holevo bound
    21.4 Exercises
    22 Quantum data compression
    22.1 Quantum data compression and fidelity
    22.2 Schumacher's quantum coding theorem
    22.3 A graphical and numerical illustration of Schumacher's quantum coding theorem
    22.4 Exercises
    23 Quantum channel noise and channel capacity
    23.1 Noisy quantum channels
    23.2 The Holevo-Schumacher-Westmoreland capacity theorem
    23.3 Capacity of some quantum channels
    23.4 Exercises
    24 Quantum error correction
    24.1 Quantum repetition code
    24.2 Shor code
    24.3 Calderbank-Shor-Steane (CSS) codes
    24.4 Hadamard-Steane code
    24.5 Exercises
    25 Classical and quantum cryptography
    25.1 Message encryption,decryption,and code breaking
    25.2 Encryption and decryption with binary numbers
    25.3 Double-key encryption
    25.4 Cryptography without key exchange
    25.5 Public-key cryptography and RSA
    25.6 Data encryption standard (DES) and advanced encryption standard (AES)
    25.7 Quantum cryptography
    25.8 Electromagnetic waves, polarization states, photons, and quantum measurements
    25.9 A secure photon communication channel
    25.10 The BB84 protocol for QKD
    25.11 The B92 protocol
    25.12 The EPR protocol
    25.13 Is quantum cryptography 'invulnerable'?
    Appendix A (Chapter 4) Boltzmann's entropy
    Appendix B (Chapter 4) Shannon's entropy
    Appendix C (Chapter 4) Maximum entropy of discrete sources
    Appendix D (Chapter 5) Markov chains and the second law of thermodynamics
    Appendix E (Chapter 6) From discrete to continuous entropy
    Appendix F (Chapter 8) Kraft-McMillan inequality
    Appendix G (Chapter 9) Overview of data compression standards
    Appendix H (Chapter 10) Arithmetic coding algorithm
    Appendix I (Chapter 10) Lempel-Ziv distinct parsing
    Appendix J (Chapter 11) Error-correction capability of linear block codes
    Appendix K (Chapter 13) Capacity of binary communication channels
    Appendix L (Chapter 13) Converse proof of the channel coding theorem
    Appendix M (Chapter 16) Bloch sphere representation of the qubit
    Appendix N (Chapter 16) Pauli matrices, rotations, and unitary operators
    Appendix O (Chapter 17) Heisenberg uncertainty principle
    Appendix P (Chapter 18) Two-qubit teleportation
    Appendix Q (Chapter 19) Quantum Fourier transform circuit
    Appendix R (Chapter 20) Properties of continued fraction expansion
    Appendix S (Chapter 20) Computation of inverse Fourier transform in the factorization of N=21 through Shor's algorithm
    Appendix T (Chapter 20) Modular arithmetic and Euler's theorem
    Appendix U (Chapter 21) Klein's inequality
    Appendix V (Chapter 21) Schmidt decomposition of joint pure states
    Appendix W (Chapter 21) State purification
    Appendix X (Chapter 21) Holevo bound
    Appendix Y (Chapter 25) Polynomial byte representation and modular multiplication
    Index

    Sample Reading
    1 Probability basics
    Because of the reader's interest in information theory, it is assumed that, to some extent, he or she is relatively familiar with probability theory, its main concepts, theorems, and practical tools. Whether a graduate student or a confirmed professional, it is possible, however, that a good fraction, if not all of this background knowledge has been somewhat forgotten over time, or has become a bit rusty, or even worse, completely obliterated by one's academic or professional specialization!
    This is why this book includes a couple of chapters on probability basics. Should such basics be crystal clear in the reader's mind, however, then these two chapters could be skipped at once. They can always be revisited later for backup, should some of the associated concepts and tools present any hurdles in the following chapters. This being stated, some expert readers may yet dare testing their knowledge by considering some of this chapter's (easy) problems, for starters. Finally, any parent or teacher might find the first chapter useful to introduce children and teens to probability.
    I have sought to make this review of probability basics as simple, informal, and practical as it could be. Just like the rest of this book, it is definitely not intended to be a math course, according to the canonic theorem–proof–lemma–example suite. There exist scores of rigorous books on probability theory at all levels, as well as many Internet sites providing elementary tutorials on the subject. But one will find there either too much or too little material to approach Information Theory, leading to potential discouragement. Here, I shall be content with only those elements and tools that are needed or are used in this book. I present them in an original and straightforward way, using fun examples. I have no concern to be rigorous and complete in the academic sense, but only to remain accurate and clear in all possible simplifications. With this approach, even a reader who has had little or no exposure to probability theory should also be able to enjoy the rest of this book.
    1.1 Events, event space, and probabilities
    As we experience it, reality can be viewed as made of different environments or situations in time and space, where a variety of possible events may take place. Consider dull and boring life events. Excluding future possibilities, basic events can be anything like:
    It is raining,
    I miss the train,
    Mom calls,
    The check is in the mail,
    The flight has been delayed,
    The light bulb is burnt out,
    The client signed the contract,
    The team won the game.
    Here, the events are defined in the present or past tense, meaning that they are known facts. These known facts represent something that is either true or false, experienced or not, verified or not. If I say, "Tomorrow will be raining," this is only an assumption concerning the future, which may or may not turn out to be true (for that matter, weather forecasts do not enjoy universal trust). Then tomorrow will tell, with rain being a more likely possibility among other ones. Thus, future events, as we may expect them to come out, are well defined facts associated with some degree of likelihood. If we are amidst the Sahara desert or in Paris on a day in November, then rain as an event is associated with a very low or a very high likelihood, respectively. Yet, that day precisely it may rain in the desert or it may shine in Paris, against all preconceived certainties. To make things even more complex (and for that matter, to make life exciting), a few other events may occur, which weren't included in any of our predictions.
    Within a given environment of causes and effects, one can make a list of all possible events. The set of events is referred to as an event space (also called sample space). The event space includes anything that can possibly happen.[1] In the case of a sports match between two opposing teams, A and B, for instance, the basic event space is the four-element set
    S = {team A wins, team B wins, draw, game canceled},    (1.1)
    with it being implicit that if team A wins, then team B loses, and the reverse. We can then say that the events "team A wins" and "team B loses" are strictly equivalent, and need not be listed twice in the event space. People may take bets as to which team is likely to win (not without some local or affective bias). There may be a draw, or the game may be canceled because of a storm or an earthquake, in that order of likelihood. This pretty much closes the event space.
    When considering a trial or an experiment, events are referred to as outcomes. An experiment may consist of picking up a card from a 32-card deck. One out of the 32 possible outcomes is the card being the Queen of Hearts. The event space associated with this experiment is the list of all 32 cards. Another experiment may consist in picking up two cards successively, which defines a different event space, as illustrated in Section 1.3, which concerns combined and joint events.
    [1] In any environment, the list of possible events is generally infinite. One may then conceive of the event space as a limited set of well defined events which encompass all known possibilities at the time of the inventory. If other unknown possibilities exist, then an event category called "other" can be introduced to close the event space.
    The probability is the mathematical measure of the likelihood associated with a given event. This measure is called p(event). By definition, the measure ranges in a zero-to-one scale. Consistently with this definition, p(event) = 0 means that the event is absolutely unlikely or "impossible," and p(event) = 1 is absolutely certain.
    Let us not discuss here what "absolutely" or "impossible" might really mean in our physical world. As we know, such extreme notions are only relative ones! Simply defined, without purchasing a ticket, it is impossible to win the lottery! And driving 50 mph above the speed limit while passing in front of a police patrol leads to absolute certainty of getting a ticket. Let's leave alone the weak possibilities of finding by chance the winning lottery ticket on the curb, or that the police officer turns out to be an old schoolmate. That's part of the event space, too, but let's not stretch reality too far. Let us then be satisfied here with the intuitive notions that impossibility and absolute certainty do actually exist.
    Next, let us formalize what has just been described. A set of different events in a family called x may be labeled according to a series x1, x2, . . . , xN, where N is the number of events in the event space S = {x1, x2, . . . , xN}. The probability p(event = xi), namely, the probability that the outcome turns out to be the event xi, will be noted p(xi) for short.
    In the general case, and as we well know, events are neither "absolutely certain" nor "impossible." Therefore, their associated probabilities can be any real number between 0 and 1. Formally, for all events xi belonging to the space S = {x1, x2, . . . , xN}, we have:
    0 ≤ p(xi) ≤ 1.    (1.2)
    Probabilities are also commonly defined as percentages. The event is said to have anything between a 0% chance (impossible) and a 100% chance (absolutely certain) of happening, which means strictly the same as using a 0-1 scale. For instance, an election poll will give a 55% chance of a candidate winning. It is equivalent to saying that the odds for this candidate are 55:45, or that p(candidate wins) = 0.55.
    As a fundamental rule, the sum of all probabilities associated with an event space S is equal to unity. Formally,
    p(x1) + p(x2) + · · · + p(xN) = Σ_{i=1}^{N} p(xi) = 1.    (1.3)
    In the above, the symbol Σ (in Greek, capital S, or sigma) implies the summation of the argument p(xi) with index i being varied from i = 1 to i = N, as specified under and above the sigma sign. This concise math notation is to be well assimilated, as it will be used extensively throughout this book. We can interpret the above summation rule according to:
    It is absolutely certain that one event in the space will occur.
    This is another way of stating that the space includes all possibilities, as for the game space defined in Eq. (1.1). I will come back to this notion in Section 1.3, when considering combined probabilities.
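    To make the normalization rule concrete, here is a minimal Python sketch (our illustration, not the book's) that encodes a four-outcome event space like that of Eq. (1.1) as a dictionary and checks Eqs. (1.2) and (1.3); the probability values are assumed for the example only.

        # Event space of Eq. (1.1) with assumed, illustrative probabilities.
        game_space = {
            "team A wins": 0.45,
            "team B wins": 0.40,
            "draw": 0.13,
            "game canceled": 0.02,
        }

        # Eq. (1.2): each probability lies between 0 and 1.
        assert all(0.0 <= p <= 1.0 for p in game_space.values())

        # Eq. (1.3): over a complete event space, the probabilities sum to unity.
        total = sum(game_space.values())
        assert abs(total - 1.0) < 1e-12
        print("sum over event space =", total)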
    But how are the probabilities calculated or estimated? The answer depends on whether or not the event space is well or completely defined. Assume first, for simplicity, the first case: we know for sure all the possible events and the space is complete. Consider then two familiar games: coin tossing and playing dice, which I am going to use as examples.
    Coin tossing
    The coin has two sides, heads and tails. The experiment of tossing the coin has two possible outcomes (heads or tails), if we discard any possibility that the coin rolls on the floor and stops on its edge, as a third physical outcome! To be sure, the coin's mass is also assumed to be uniformly distributed into both sides, and the coin randomly flipped, in such a way that no side is more likely to show up than the other. The two outcomes are said to be equiprobable. The event space is S = {heads, tails}, and, according to the previous assumptions, p(heads) = p(tails). Since the space includes all possibilities, we apply the rule in Eq. (1.3) to get p(heads) = p(tails) = 1/2 = 0.5. The odds of getting heads or tails are 50%. In contrast, a realistic coin mass distribution and coin flip may not be so perfect, so that, for instance, p(heads) = 0.55 and p(tails) = 0.45.
    Rolling dice (game 1)
    Play first with a single die. The die has six faces numbered one to six (after their number of spots). As for the coin, the die is supposed to land on one face, excluding the possibility (however well observed in real life!) that it may stop on one of its eight corners after stopping against an obstacle. Thus the event space is S = {1, 2, 3, 4, 5, 6}, and with the equiprobability assumption, we have p(1) = p(2) = · · · = p(6) = 1/6 ≈ 0.166 666 6.
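    The equiprobability argument above is just Eq. (1.3) with all N probabilities equal, so that N·p = 1. A short Python check (ours, for illustration only):

        # Uniform PDF over N equiprobable events: Eq. (1.3) gives N*p = 1.
        N = 6                      # faces of a single die
        p = 1 / N                  # p(1) = ... = p(6) = 1/6
        assert abs(N * p - 1.0) < 1e-12
        print(round(p, 7))         # 0.1666667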
    Rolling dice (game 2)
    Now play with two dice. The game consists in adding the spots showing in the faces. Taking successive turns between different players, the winner is the one who gets the highest count. The sum of points varies from 1 + 1 = 2 to 6 + 6 = 12, as illustrated in Fig. 1.1. The event space is thus S = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, corresponding to 36 possible outcomes. Here, the key difference from the two previous examples is that the events (sum of spots) are not equiprobable. It is, indeed, seen from the figure that there exist six possibilities of obtaining the number x = 7, while there is only one possibility of obtaining either the number x = 2 or the number x = 12. The count of possibilities is shown in the graph in Fig. 1.2(a).
    Such a graph is referred to as a histogram. If one divides the number of counts by the total number of possibilities (here 36), one obtains the corresponding probabilities. For instance, p(x = 2) = p(x = 12) = 1/36 = 0.028, and p(x = 7) = 6/36 = 0.167.
    The different probabilities are plotted in Fig. 1.2(b). To complete the plot, we have included the two count events x = 1 and x = 13, which both have zero probability. Such a plot is referred to as the probability distribution; it is also called the probability distribution function (PDF). See more in Chapter 2 on PDFs and examples. Consistently with the rule in Eq. (1.3), the sum of all probabilities is equal to unity. It is equivalent to say that the surface between the PDF curve linking the different points (x, p(x)) and the horizontal axis is unity. Indeed, this surface is given by s = (13 − 1) × p(x = 7)/2 = 12 × (6/36)/2 ≡ 1.
    [Figure 1.1: The 36 possible outcomes of counting points from casting two dice.]
    [Figure 1.2: (a) Number of possibilities associated with each possible outcome of casting two dice; (b) corresponding probability distribution.]
    The last example allows us to introduce a fundamental definition of the probability p(xi) in the general case where the events xi in the space S = {x1, x2, . . . , xN} do not have equal likelihood:
    p(xi) = (number of possibilities for the event xi) / (total number of possibilities).    (1.4)
    This general definition has been used in the three previous examples. The single coin tossing or single die casting are characterized by equiprobable events, in which case the PDF is said to be uniform. In the case of the two-dice roll, the PDF is nonuniform with a triangular shape, and peaks about the event x = 7, as we have just seen.
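    As a cross-check of Eq. (1.4) and the two-dice example, the short Python sketch below (our illustration; the variable names are not from the book) enumerates the 36 equiprobable outcomes, builds the histogram of Fig. 1.2(a), and recovers the probabilities quoted in the text:

        from collections import Counter
        from itertools import product

        # Enumerate the 36 outcomes of casting two dice and count the
        # possibilities for each sum x (the histogram of Fig. 1.2(a)).
        counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
        total = sum(counts.values())          # 36 possibilities in all

        # Eq. (1.4): p(x) = possibilities for event x / total possibilities.
        pdf = {x: n / total for x, n in sorted(counts.items())}

        print(pdf[2], pdf[12], pdf[7])        # 1/36, 1/36, and 6/36 ≈ 0.167
        print(sum(pdf.values()))              # unity, as required by Eq. (1.3)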
    Here we are reaching a subtle point in the notion of probability, which is often mistaken or misunderstood. The known fact that, in principle, a flipped coin has equal chances to fall on heads or tails provides no clue as to what the outcome will be. We may just observe the coin falling on tails several times in a row, before it finally chooses to fall on heads, as the reader can easily check (try doing the experiment!). Therefore, the meaning of a probability is not the prediction of the outcome (event x being verified) but the measure of how likely such an event is. Therefore, it actually takes quite a number of trials to measure such likelihood: one trial is surely not enough, and worse, several trials could lead to the wrong measure. To sense the difference between probability and outcome better, and to get a notion of how many trials could be required to approach a good measure, let's go through a realistic coin-tossing experiment.
    First, it is important to practice a little bit in order to know how to flip the coin with a good feeling of randomness (the reader will find that such a feeling is far from obvious!). The experiment may proceed as follows: flip the coin then record the result on a piece of paper (heads = H, tails = T), and make a pause once in a while to enter the data in a computer spreadsheet (it being important for concentration and expediency not to try performing the two tasks altogether). The interest of the computer spreadsheet is the possibility of seeing the statistics plotted as the experiment unfolds. This creates a real sense of fun. Actually, the computer should plot the cumulative count of heads and tails, as well as the experimental PDF calculated at each step from Eq. (1.4), which for clarity I reformulate as follows:
    p(heads or tails) = (cumulative count of heads or tails) / (number of trials so far).
    The plots of the author's own experiment, by means of 700 successive trials, are shown in Fig. 1.3. The first figure shows the cumulative counts for heads and tails, while the second figure shows plots of the corresponding experimental probabilities p(heads), p(tails) as the number of trials increases. As expected, the counts for heads and tails are seemingly equal, at least when considering large numbers. However, the detail shows that time and again, the counts significantly depart from each other, meaning that there are more heads than tails or the reverse. But eventually these discrepancies seem to correct themselves as the game progresses, as if the coin would "know" how to come back to the 50:50 odds rule. Strange isn't it? The discrepancies between counts are reflected by the wide oscillations of the PDF (Fig. 1.3(b)). But as the experiment progresses, the oscillations are damped to the point where p(heads) ≈ p(tails) ≈ 0.5, following an asymptotic behavior.[2]
    [Figure 1.3: Experimental determination of the probability distribution of coin flipping, by means of 700 successive trials: (a) cumulative count of heads and tails outcomes, with inset showing detail for the first 100 trials; (b) corresponding probabilities.]
    [2] Performing this experiment and obtaining such results is not straightforward. Different types of coins must be tested first, some being easier to flip than others, because of size or mass. Some coins seem not to lead
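    A reader without the patience for 700 physical flips can reproduce the flavor of this experiment numerically. The Python sketch below (a simulation under the ideal equiprobability assumption, not the author's physical data) tracks the running estimate of p(heads), which oscillates before settling near 0.5:

        import random

        random.seed(1)                        # fixed seed, for reproducibility only
        heads = 0
        for trial in range(1, 701):           # 700 simulated coin flips
            heads += random.random() < 0.5    # fair flip: heads with probability 1/2
            if trial in (10, 100, 700):
                # Running estimate of p(heads): cumulative count / trials so far.
                print(trial, heads / trial)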



     