# Press, Teukolsky, Vetterling, Flannery - Numerical Recipes in C


Numerical Recipes in C

The Art of Scientific Computing

Second Edition

William H. Press

Harvard-Smithsonian Center for Astrophysics

Saul A. Teukolsky

Department of Physics, Cornell University

William T. Vetterling

Polaroid Corporation

Brian P. Flannery

EXXON Research and Engineering Company

CAMBRIDGE UNIVERSITY PRESS

Cambridge New York Port Chester Melbourne Sydney

Published by the Press Syndicate of the University of Cambridge

The Pitt Building, Trumpington Street, Cambridge CB2 1RP

40 West 20th Street, New York, NY 10011-4211, USA

10 Stamford Road, Oakleigh, Melbourne 3166, Australia

Copyright © Cambridge University Press 1988, 1992
except for §13.10 and Appendix B, which are placed into the public domain,
and except for all other computer programs and procedures, which are
Copyright © Numerical Recipes Software 1987, 1988, 1992, 1997
All Rights Reserved.

Some sections of this book were originally published, in different form, in Computers in Physics magazine, Copyright © American Institute of Physics, 1988–1992.

First Edition originally published 1988; Second Edition originally published 1992.

Reprinted with corrections, 1993, 1994, 1995, 1997.

This reprinting is corrected to software version 2.08.

Printed in the United States of America

Typeset in TeX

Without an additional license to use the contained software, this book is intended as a text and reference book, for reading purposes only. A free license for limited use of the software by the individual owner of a copy of this book who personally types one or more routines into a single computer is granted under terms described on p. xvii. See the section “License Information” (pp. xvi–xviii) for information on obtaining more general licenses at low cost.

Machine-readable media containing the software in this book, with included licenses for use on a single screen, are available from Cambridge University Press. See the order form at the back of the book, email to “orders@cup.org” (North America) or “trade@cup.cam.ac.uk” (rest of world), or write to Cambridge University Press, 110 Midland Avenue, Port Chester, NY 10573 (USA), for further information.

The software may also be downloaded, with immediate purchase of a license also possible, from the Numerical Recipes Software Web Site (http://www.nr.com). Unlicensed transfer of Numerical Recipes programs to any other format, or to any computer except one that is specifically licensed, is strictly prohibited. Technical questions, corrections, and requests for information should be addressed to Numerical Recipes Software, P.O. Box 243, Cambridge, MA 02238 (USA), email “info@nr.com”, or fax 781 863-1739.

Library of Congress Cataloging in Publication Data

Numerical recipes in C : the art of scientific computing / William H. Press . . . [et al.]. – 2nd ed.
Includes bibliographical references (p. ) and index.
ISBN 0-521-43108-5
1. Numerical analysis–Computer programs. 2. Science–Mathematics–Computer programs. 3. C (Computer program language) I. Press, William H.
QA297.N866 1992
519.4028553–dc20 92-8876

A catalog record for this book is available from the British Library.

ISBN 0 521 43108 5  Book
ISBN 0 521 43720 2  Example book in C
ISBN 0 521 43724 5  C diskette (IBM 3.5″, 1.44M)
ISBN 0 521 57608 3  CDROM (IBM PC/Macintosh)
ISBN 0 521 57607 5  CDROM (UNIX)

Contents

Preface to the Second Edition
Preface to the First Edition
License Information
Computer Programs by Chapter and Section

1 Preliminaries
1.0 Introduction
1.1 Program Organization and Control Structures
1.2 Some C Conventions for Scientific Computing
1.3 Error, Accuracy, and Stability

2 Solution of Linear Algebraic Equations
2.0 Introduction
2.1 Gauss-Jordan Elimination
2.2 Gaussian Elimination with Backsubstitution
2.3 LU Decomposition and Its Applications
2.4 Tridiagonal and Band Diagonal Systems of Equations
2.5 Iterative Improvement of a Solution to Linear Equations
2.6 Singular Value Decomposition
2.7 Sparse Linear Systems
2.8 Vandermonde Matrices and Toeplitz Matrices
2.9 Cholesky Decomposition
2.10 QR Decomposition
2.11 Is Matrix Inversion an N³ Process?

3 Interpolation and Extrapolation
3.0 Introduction
3.1 Polynomial Interpolation and Extrapolation
3.2 Rational Function Interpolation and Extrapolation
3.3 Cubic Spline Interpolation
3.4 How to Search an Ordered Table
3.5 Coefficients of the Interpolating Polynomial
3.6 Interpolation in Two or More Dimensions

4 Integration of Functions
4.0 Introduction
4.1 Classical Formulas for Equally Spaced Abscissas
4.2 Elementary Algorithms
4.3 Romberg Integration
4.4 Improper Integrals
4.5 Gaussian Quadratures and Orthogonal Polynomials
4.6 Multidimensional Integrals

5 Evaluation of Functions
5.0 Introduction
5.1 Series and Their Convergence
5.2 Evaluation of Continued Fractions
5.3 Polynomials and Rational Functions
5.4 Complex Arithmetic
5.5 Recurrence Relations and Clenshaw’s Recurrence Formula
5.6 Quadratic and Cubic Equations
5.7 Numerical Derivatives
5.8 Chebyshev Approximation
5.9 Derivatives or Integrals of a Chebyshev-approximated Function
5.10 Polynomial Approximation from Chebyshev Coefficients
5.11 Economization of Power Series
5.12 Padé Approximants
5.13 Rational Chebyshev Approximation
5.14 Evaluation of Functions by Path Integration

6 Special Functions
6.0 Introduction
6.1 Gamma Function, Beta Function, Factorials, Binomial Coefficients
6.2 Incomplete Gamma Function, Error Function, Chi-Square Probability Function, Cumulative Poisson Function
6.3 Exponential Integrals
6.4 Incomplete Beta Function, Student’s Distribution, F-Distribution, Cumulative Binomial Distribution
6.5 Bessel Functions of Integer Order
6.6 Modified Bessel Functions of Integer Order
6.7 Bessel Functions of Fractional Order, Airy Functions, Spherical Bessel Functions
6.8 Spherical Harmonics
6.9 Fresnel Integrals, Cosine and Sine Integrals
6.10 Dawson’s Integral
6.11 Elliptic Integrals and Jacobian Elliptic Functions
6.12 Hypergeometric Functions

7 Random Numbers
7.0 Introduction
7.1 Uniform Deviates
7.2 Transformation Method: Exponential and Normal Deviates
7.3 Rejection Method: Gamma, Poisson, Binomial Deviates
7.4 Generation of Random Bits
7.5 Random Sequences Based on Data Encryption
7.6 Simple Monte Carlo Integration
7.7 Quasi- (that is, Sub-) Random Sequences
7.8 Adaptive and Recursive Monte Carlo Methods

8 Sorting
8.0 Introduction
8.1 Straight Insertion and Shell’s Method
8.2 Quicksort
8.3 Heapsort
8.4 Indexing and Ranking
8.5 Selecting the Mth Largest
8.6 Determination of Equivalence Classes

9 Root Finding and Nonlinear Sets of Equations
9.0 Introduction
9.1 Bracketing and Bisection
9.2 Secant Method, False Position Method, and Ridders’ Method
9.3 Van Wijngaarden–Dekker–Brent Method
9.4 Newton-Raphson Method Using Derivative
9.5 Roots of Polynomials
9.6 Newton-Raphson Method for Nonlinear Systems of Equations
9.7 Globally Convergent Methods for Nonlinear Systems of Equations

10 Minimization or Maximization of Functions
10.0 Introduction
10.1 Golden Section Search in One Dimension
10.2 Parabolic Interpolation and Brent’s Method in One Dimension
10.3 One-Dimensional Search with First Derivatives
10.4 Downhill Simplex Method in Multidimensions
10.5 Direction Set (Powell’s) Methods in Multidimensions
10.6 Conjugate Gradient Methods in Multidimensions
10.7 Variable Metric Methods in Multidimensions
10.8 Linear Programming and the Simplex Method
10.9 Simulated Annealing Methods

11 Eigensystems
11.0 Introduction
11.1 Jacobi Transformations of a Symmetric Matrix
11.2 Reduction of a Symmetric Matrix to Tridiagonal Form: Givens and Householder Reductions
11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix
11.4 Hermitian Matrices
11.5 Reduction of a General Matrix to Hessenberg Form
11.6 The QR Algorithm for Real Hessenberg Matrices
11.7 Improving Eigenvalues and/or Finding Eigenvectors by Inverse Iteration

12 Fast Fourier Transform
12.0 Introduction
12.1 Fourier Transform of Discretely Sampled Data
12.2 Fast Fourier Transform (FFT)
12.3 FFT of Real Functions, Sine and Cosine Transforms
12.4 FFT in Two or More Dimensions
12.5 Fourier Transforms of Real Data in Two and Three Dimensions
12.6 External Storage or Memory-Local FFTs

13 Fourier and Spectral Applications
13.0 Introduction
13.1 Convolution and Deconvolution Using the FFT
13.2 Correlation and Autocorrelation Using the FFT
13.3 Optimal (Wiener) Filtering with the FFT
13.4 Power Spectrum Estimation Using the FFT
13.5 Digital Filtering in the Time Domain
13.6 Linear Prediction and Linear Predictive Coding
13.7 Power Spectrum Estimation by the Maximum Entropy (All Poles) Method
13.8 Spectral Analysis of Unevenly Sampled Data
13.9 Computing Fourier Integrals Using the FFT
13.10 Wavelet Transforms
13.11 Numerical Use of the Sampling Theorem

14 Statistical Description of Data
14.0 Introduction
14.1 Moments of a Distribution: Mean, Variance, Skewness, and So Forth
14.2 Do Two Distributions Have the Same Means or Variances?
14.3 Are Two Distributions Different?
14.4 Contingency Table Analysis of Two Distributions
14.5 Linear Correlation
14.6 Nonparametric or Rank Correlation
14.7 Do Two-Dimensional Distributions Differ?
14.8 Savitzky-Golay Smoothing Filters

15 Modeling of Data
15.0 Introduction
15.1 Least Squares as a Maximum Likelihood Estimator
15.2 Fitting Data to a Straight Line
15.3 Straight-Line Data with Errors in Both Coordinates
15.4 General Linear Least Squares
15.5 Nonlinear Models
15.6 Confidence Limits on Estimated Model Parameters
15.7 Robust Estimation

16 Integration of Ordinary Differential Equations
16.0 Introduction
16.1 Runge-Kutta Method
16.2 Adaptive Stepsize Control for Runge-Kutta
16.3 Modified Midpoint Method
16.4 Richardson Extrapolation and the Bulirsch-Stoer Method
16.5 Second-Order Conservative Equations
16.6 Stiff Sets of Equations
16.7 Multistep, Multivalue, and Predictor-Corrector Methods

17 Two Point Boundary Value Problems
17.0 Introduction
17.1 The Shooting Method
17.2 Shooting to a Fitting Point
17.3 Relaxation Methods
17.4 A Worked Example: Spheroidal Harmonics
17.5 Automated Allocation of Mesh Points
17.6 Handling Internal Boundary Conditions or Singular Points

18 Integral Equations and Inverse Theory
18.0 Introduction
18.1 Fredholm Equations of the Second Kind
18.2 Volterra Equations
18.3 Integral Equations with Singular Kernels
18.4 Inverse Problems and the Use of A Priori Information
18.5 Linear Regularization Methods
18.6 Backus-Gilbert Method
18.7 Maximum Entropy Image Restoration

19 Partial Differential Equations
19.0 Introduction
19.1 Flux-Conservative Initial Value Problems
19.2 Diffusive Initial Value Problems
19.3 Initial Value Problems in Multidimensions
19.4 Fourier and Cyclic Reduction Methods for Boundary Value Problems
19.5 Relaxation Methods for Boundary Value Problems
19.6 Multigrid Methods for Boundary Value Problems

20 Less-Numerical Algorithms
20.0 Introduction
20.1 Diagnosing Machine Parameters
20.2 Gray Codes
20.3 Cyclic Redundancy and Other Checksums
20.4 Huffman Coding and Compression of Data
20.5 Arithmetic Coding
20.6 Arithmetic at Arbitrary Precision

References
Appendix A: Table of Prototype Declarations
Appendix B: Utility Routines
Appendix C: Complex Arithmetic
Index of Programs and Dependencies
General Index

Preface to the Second Edition

Our aim in writing the original edition of Numerical Recipes was to provide a book that combined general discussion, analytical mathematics, algorithmics, and actual working programs. The success of the first edition puts us now in a difficult, though hardly unenviable, position. We wanted, then and now, to write a book that is informal, fearlessly editorial, unesoteric, and above all useful. There is a danger that, if we are not careful, we might produce a second edition that is weighty, balanced, scholarly, and boring.

It is a mixed blessing that we know more now than we did six years ago. Then, we were making educated guesses, based on existing literature and our own research, about which numerical techniques were the most important and robust. Now, we have the benefit of direct feedback from a large reader community. Letters to our alter-ego enterprise, Numerical Recipes Software, are in the thousands per year. (Please, don’t telephone us.) Our post office box has become a magnet for letters pointing out that we have omitted some particular technique, well known to be important in a particular field of science or engineering. We value such letters, and digest them carefully, especially when they point us to specific references in the literature.

The inevitable result of this input is that this Second Edition of Numerical Recipes is substantially larger than its predecessor, in fact about 50% larger both in words and number of included programs (the latter now numbering well over 300). “Don’t let the book grow in size,” is the advice that we received from several wise colleagues. We have tried to follow the intended spirit of that advice, even as we violate the letter of it. We have not lengthened, or increased in difficulty, the book’s principal discussions of mainstream topics. Many new topics are presented at this same accessible level. Some topics, both from the earlier edition and new to this one, are now set in smaller type that labels them as being “advanced.” The reader who ignores such advanced sections completely will not, we think, find any lack of continuity in the shorter volume that results.

Here are some highlights of the new material in this Second Edition:

• a new chapter on integral equations and inverse methods
• a detailed treatment of multigrid methods for solving elliptic partial differential equations
• routines for band diagonal linear systems
• improved routines for linear algebra on sparse matrices
• Cholesky and QR decomposition
• orthogonal polynomials and Gaussian quadratures for arbitrary weight functions
• methods for calculating numerical derivatives
• Padé approximants, and rational Chebyshev approximation
• Bessel functions, and modified Bessel functions, of fractional order; and several other new special functions
• improved random number routines
• quasi-random sequences
• routines for adaptive and recursive Monte Carlo integration in high-dimensional spaces
• globally convergent methods for sets of nonlinear equations
• simulated annealing minimization for continuous control spaces
• fast Fourier transform (FFT) for real data in two and three dimensions
• fast Fourier transform (FFT) using external storage
• improved fast cosine transform routines
• wavelet transforms
• Fourier integrals with upper and lower limits
• spectral analysis on unevenly sampled data
• Savitzky-Golay smoothing filters
• fitting straight line data with errors in both coordinates
• a two-dimensional Kolmogorov-Smirnov test
• the statistical bootstrap method
• embedded Runge-Kutta-Fehlberg methods for differential equations
• high-order methods for stiff differential equations
• a new chapter on “less-numerical” algorithms, including Huffman and arithmetic coding, arbitrary precision arithmetic, and several other topics.

Consult the Preface to the First Edition, following, or the Table of Contents, for a list of the more “basic” subjects treated.

Acknowledgments

It is not possible for us to list by name here all the readers who have made useful suggestions; we are grateful for these. In the text, we attempt to give specific attribution for ideas that appear to be original, and not known in the literature. We apologize in advance for any omissions.

Some readers and colleagues have been particularly generous in providing us with ideas, comments, suggestions, and programs for this Second Edition. We especially want to thank George Rybicki, Philip Pinto, Peter Lepage, Robert Lupton, Douglas Eardley, Ramesh Narayan, David Spergel, Alan Oppenheim, Sallie Baliunas, Scott Tremaine, Glennys Farrar, Steven Block, John Peacock, Thomas Loredo, Matthew Choptuik, Gregory Cook, L. Samuel Finn, P. Deuflhard, Harold Lewis, Peter Weinberger, David Syer, Richard Ferch, Steven Ebstein, Bradley Keister, and William Gould. We have been helped by Nancy Lee Snyder’s mastery of a complicated TeX manuscript. We express appreciation to our editors Lauren Cowles and Alan Harvey at Cambridge University Press, and to our production editor Russell Hahn. We remain, of course, grateful to the individuals acknowledged in the Preface to the First Edition.

Special acknowledgment is due to programming consultant Seth Finkelstein, who wrote, rewrote, or influenced many of the routines in this book, as well as in its FORTRAN-language twin and the companion Example books. Our project has benefited enormously from Seth’s talent for detecting, and following the trail of, even very slight anomalies (often compiler bugs, but occasionally our errors), and from his good programming sense. To the extent that this edition of Numerical Recipes in C has a more graceful and “C-like” programming style than its predecessor, most of the credit goes to Seth. (Of course, we accept the blame for the FORTRANish lapses that still remain.)

We prepared this book for publication on DEC and Sun workstations running the UNIX operating system, and on a 486/33 PC compatible running MS-DOS 5.0/Windows 3.0. (See §1.0 for a list of additional computers used in program tests.) We enthusiastically recommend the principal software used: GNU Emacs, TeX, Perl, Adobe Illustrator, and PostScript. Also used were a variety of C compilers – too numerous (and sometimes too buggy) for individual acknowledgment. It is a sobering fact that our standard test suite (exercising all the routines in this book) has uncovered compiler bugs in many of the compilers tried. When possible, we work with developers to see that such bugs get fixed; we encourage interested compiler developers to contact us about such arrangements.

WHP and SAT acknowledge the continued support of the U.S. National Science Foundation for their research on computational methods. D.A.R.P.A. support is acknowledged for §13.10 on wavelets.

June, 1992

William H. Press

Saul A. Teukolsky

William T. Vetterling

Brian P. Flannery

Preface to the First Edition

We call this book Numerical Recipes for several reasons. In one sense, this book is indeed a “cookbook” on numerical computation. However there is an important distinction between a cookbook and a restaurant menu. The latter presents choices among complete dishes in each of which the individual flavors are blended and disguised. The former — and this book — reveals the individual ingredients and explains how they are prepared and combined.

Another purpose of the title is to connote an eclectic mixture of presentational techniques. This book is unique, we think, in offering, for each topic considered, a certain amount of general discussion, a certain amount of analytical mathematics, a certain amount of discussion of algorithmics, and (most important) actual implementations of these ideas in the form of working computer routines. Our task has been to find the right balance among these ingredients for each topic. You will find that for some topics we have tilted quite far to the analytic side; this is where we have felt there to be gaps in the “standard” mathematical training. For other topics, where the mathematical prerequisites are universally held, we have tilted towards more in-depth discussion of the nature of the computational algorithms, or towards practical questions of implementation.

We admit, therefore, to some unevenness in the “level” of this book. About half of it is suitable for an advanced undergraduate course on numerical computation for science or engineering majors. The other half ranges from the level of a graduate course to that of a professional reference. Most cookbooks have, after all, recipes at varying levels of complexity. An attractive feature of this approach, we think, is that the reader can use the book at increasing levels of sophistication as his/her experience grows. Even inexperienced readers should be able to use our most advanced routines as black boxes. Having done so, we hope that these readers will subsequently go back and learn what secrets are inside.

If there is a single dominant theme in this book, it is that practical methods of numerical computation can be simultaneously efficient, clever, and — important — clear. The alternative viewpoint, that efficient computational methods must necessarily be so arcane and complex as to be useful only in “black box” form, we firmly reject.

Our purpose in this book is thus to open up a large number of computational black boxes to your scrutiny. We want to teach you to take apart these black boxes and to put them back together again, modifying them to suit your specific needs.

We assume that you are mathematically literate, i.e., that you have the normal mathematical preparation associated with an undergraduate degree in a physical science, or engineering, or economics, or a quantitative social science. We assume that you know how to program a computer. We do not assume that you have any prior formal knowledge of numerical analysis or numerical methods.

The scope of Numerical Recipes is supposed to be “everything up to, but not including, partial differential equations.” We honor this in the breach: First, we do have one introductory chapter on methods for partial differential equations (Chapter 19). Second, we obviously cannot include everything else. All the so-called “standard” topics of a numerical analysis course have been included in this book: linear equations (Chapter 2), interpolation and extrapolation (Chapter 3), integration (Chapter 4), nonlinear root-finding (Chapter 9), eigensystems (Chapter 11), and ordinary differential equations (Chapter 16). Most of these topics have been taken beyond their standard treatments into some advanced material which we have felt to be particularly important or useful.

Some other subjects that we cover in detail are not usually found in the standard numerical analysis texts. These include the evaluation of functions and of particular special functions of higher mathematics (Chapters 5 and 6); random numbers and Monte Carlo methods (Chapter 7); sorting (Chapter 8); optimization, including multidimensional methods (Chapter 10); Fourier transform methods, including FFT methods and other spectral methods (Chapters 12 and 13); two chapters on the statistical description and modeling of data (Chapters 14 and 15); and two-point boundary value problems, both shooting and relaxation methods (Chapter 17).

The programs in this book are included in ANSI-standard C. Versions of the book in FORTRAN, Pascal, and BASIC are available separately. We have more to say about the C language, and the computational environment assumed by our routines, in §1.1 (Introduction).

Acknowledgments

Many colleagues have been generous in giving us the benefit of their numerical and computational experience, in providing us with programs, in commenting on the manuscript, or in general encouragement. We particularly wish to thank George Rybicki, Douglas Eardley, Philip Marcus, Stuart Shapiro, Paul Horowitz, Bruce Musicus, Irwin Shapiro, Stephen Wolfram, Henry Abarbanel, Larry Smarr, Richard Muller, John Bahcall, and A.G.W. Cameron.

We also wish to acknowledge two individuals whom we have never met: Forman Acton, whose 1970 textbook Numerical Methods that Work (New York: Harper and Row) has surely left its stylistic mark on us; and Donald Knuth, both for his series of books on The Art of Computer Programming (Reading, MA: Addison-Wesley), and for TeX, the computer typesetting language which immensely aided production of this book.

Research by the authors on computational methods was supported in part by the U.S. National Science Foundation.

October, 1985

William H. Press

Brian P. Flannery

Saul A. Teukolsky

William T. Vetterling

License Information

Read this section if you want to use the programs in this book on a computer. You’ll need to read the following Disclaimer of Warranty, get the programs onto your computer, and acquire a Numerical Recipes software license. (Without this license, which can be the free “immediate license” under terms described below, the book is intended as a text and reference book, for reading purposes only.)

Disclaimer of Warranty

We make no warranties, express or implied, that the programs contained in this volume are free of error, or are consistent with any particular standard of merchantability, or that they will meet your requirements for any particular application. They should not be relied on for solving a problem whose incorrect solution could result in injury to a person or loss of property. If you do use the programs in such a manner, it is at your own risk. The authors and publisher disclaim all liability for direct or consequential damages resulting from your use of the programs.

How to Get the Code onto Your Computer

Pick one of the following methods:

• You can type the programs from this book directly into your computer. In this case, the only kind of license available to you is the free “immediate license” (see below). You are not authorized to transfer or distribute a machine-readable copy to any other person, nor to have any other person type the programs into a computer on your behalf. We do not want to hear bug reports from you if you choose this option, because experience has shown that virtually all reported bugs in such cases are typing errors!

• You can download the Numerical Recipes programs electronically from the Numerical Recipes On-Line Software Store, located at http://www.nr.com, our Web site. They are packaged as a password-protected file, and you’ll need to purchase a license to unpack them. You can get a single-screen license and password immediately, on-line, from the On-Line Store, with fees ranging from $50 (PC, Macintosh, educational institutions’ UNIX) to $140 (general UNIX). Downloading the packaged software from the On-Line Store is also the way to start if you want to acquire a more general (multiscreen, site, or corporate) license.

• You can purchase media containing the programs from Cambridge University Press. Diskette versions are available in IBM-compatible format for machines running Windows 3.1, 95, or NT. CDROM versions in ISO-9660 format for PC, Macintosh, and UNIX systems are also available; these include both C and Fortran versions on a single CDROM (as well as versions in Pascal and BASIC from the first edition). Diskettes purchased from Cambridge University Press include a single-screen license for PC or Macintosh only. The CDROM is available with a single-screen license for PC or Macintosh (order ISBN 0 521 576083), or (at a slightly higher price) with a single-screen license for UNIX workstations (order ISBN 0 521 576075). Orders for media from Cambridge University Press can be placed at 800 872-7423 (North America only) or by email to orders@cup.org (North America) or trade@cup.cam.ac.uk (rest of world). Or, visit the Web sites http://www.cup.org (North America) or http://www.cup.cam.ac.uk (rest of world).


Types of License Offered

Here are the types of licenses that we offer. Note that some types are automatically acquired with the purchase of media from Cambridge University Press, or of an unlocking password from the Numerical Recipes On-Line Software Store, while other types of licenses require that you communicate specifically with Numerical Recipes Software (email: orders@nr.com or fax: 781 863-1739). Our Web site http://www.nr.com has additional information.

• [“Immediate License”] If you are the individual owner of a copy of this book and you type one or more of its routines into your computer, we authorize you to use them on that computer for your own personal and noncommercial purposes. You are not authorized to transfer or distribute machine-readable copies to any other person, or to use the routines on more than one machine, or to distribute executable programs containing our routines. This is the only free license.

• [“Single-Screen License”] This is the most common type of low-cost license, with terms governed by our Single Screen (Shrinkwrap) License document (complete terms available through our Web site). Basically, this license lets you use Numerical Recipes routines on any one screen (PC, workstation, X-terminal, etc.). You may also, under this license, transfer pre-compiled, executable programs incorporating our routines to other, unlicensed, screens or computers, providing that (i) your application is noncommercial (i.e., does not involve the selling of your program for a fee), (ii) the programs were first developed, compiled, and successfully run on a licensed screen, and (iii) our routines are bound into the programs in such a manner that they cannot be accessed as individual routines and cannot practicably be unbound and used in other programs. That is, under this license, your program user must not be able to use our programs as part of a program library or “mix-and-match” workbench. Conditions for other types of commercial or noncommercial distribution may be found on our Web site (http://www.nr.com).

• [“Multi-Screen, Server, Site, and Corporate Licenses”] The terms of the Single Screen License can be extended to designated groups of machines, defined by number of screens, number of machines, locations, or ownership. Significant discounts from the corresponding single-screen prices are available when the estimated number of screens exceeds 40. Contact Numerical Recipes Software (email: orders@nr.com or fax: 781 863-1739) for details.

• [“Course Right-to-Copy License”] Instructors at accredited educational institutions who have adopted this book for a course, and who have already purchased a Single Screen License (either acquired with the purchase of media, or from the Numerical Recipes On-Line Software Store), may license the programs for use in that course as follows: Mail your name, title, and address; the course name, number, dates, and estimated enrollment; and advance payment of $5 per (estimated) student to Numerical Recipes Software, at this address: P.O. Box 243, Cambridge, MA 02238 (USA). You will receive by return mail a license authorizing you to make copies of the programs for use by your students, and/or to transfer the programs to a machine accessible to your students (but only for the duration of the course).

About Copyrights on Computer Programs

Like artistic or literary compositions, computer programs are protected by

copyright. Generally it is an infringement for you to copy into your computer a

program from a copyrighted source. (It is also not a friendly thing to do, since it

deprives the program’s author of compensation for his or her creative effort.) Under


copyright law, all “derivative works” (modified versions, or translations into another

computer language) also come under the same copyright as the original work.

Copyright does not protect ideas, but only the expression of those ideas in

a particular form. In the case of a computer program, the ideas consist of the

program’s methodology and algorithm, including the necessary sequence of steps

adopted by the programmer. The expression of those ideas is the program source

code (particularly any arbitrary or stylistic choices embodied in it), its derived object

code, and any other derivative works.

If you analyze the ideas contained in a program, and then express those

ideas in your own completely different implementation, then that new program

implementation belongs to you. That is what we have done for those programs in

this book that are not entirely of our own devising. When programs in this book are

said to be “based” on programs published in copyright sources, we mean that the

ideas are the same. The expression of these ideas as source code is our own. We

believe that no material in this book infringes on an existing copyright.

Trademarks

Several registered trademarks appear within the text of this book: Sun is a

trademark of Sun Microsystems, Inc. SPARC and SPARCstation are trademarks of

SPARC International, Inc. Microsoft, Windows 95, Windows NT, PowerStation,

and MS are trademarks of Microsoft Corporation. DEC, VMS, Alpha AXP, and

ULTRIX are trademarks of Digital Equipment Corporation. IBM is a trademark of

International Business Machines Corporation. Apple and Macintosh are trademarks

of Apple Computer, Inc. UNIX is a trademark licensed exclusively through X/Open

Co. Ltd. IMSL is a trademark of Visual Numerics, Inc. NAG refers to proprietary

computer software of Numerical Algorithms Group (USA) Inc. PostScript and

Adobe Illustrator are trademarks of Adobe Systems Incorporated. Last, and no doubt

least, Numerical Recipes (when identifying products) is a trademark of Numerical

Recipes Software.

Attributions

The fact that ideas are legally “free as air” in no way supersedes the ethical

requirement that ideas be credited to their known originators. When programs in

this book are based on known sources, whether copyrighted or in the public domain,

published or “handed-down,” we have attempted to give proper attribution. Unfortunately, the lineage of many programs in common circulation is often unclear. We

would be grateful to readers for new or corrected information regarding attributions,

which we will attempt to incorporate in subsequent printings.

Computer Programs

by Chapter and Section

1.0    flmoon    calculate phases of the moon by date
1.1    julday    Julian Day number from calendar date
1.1    badluk    Friday the 13th when the moon is full
1.1    caldat    calendar date from Julian day number

2.1    gaussj    Gauss-Jordan matrix inversion and linear equation solution
2.3    ludcmp    linear equation solution, LU decomposition
2.3    lubksb    linear equation solution, backsubstitution
2.4    tridag    solution of tridiagonal systems
2.4    banmul    multiply vector by band diagonal matrix
2.4    bandec    band diagonal systems, decomposition
2.4    banbks    band diagonal systems, backsubstitution
2.5    mprove    linear equation solution, iterative improvement
2.6    svbksb    singular value backsubstitution
2.6    svdcmp    singular value decomposition of a matrix
2.6    pythag    calculate (a^2 + b^2)^(1/2) without overflow
2.7    cyclic    solution of cyclic tridiagonal systems
2.7    sprsin    convert matrix to sparse format
2.7    sprsax    product of sparse matrix and vector
2.7    sprstx    product of transpose sparse matrix and vector
2.7    sprstp    transpose of sparse matrix
2.7    sprspm    pattern multiply two sparse matrices
2.7    sprstm    threshold multiply two sparse matrices
2.7    linbcg    biconjugate gradient solution of sparse systems
2.7    snrm      used by linbcg for vector norm
2.7    atimes    used by linbcg for sparse multiplication
2.7    asolve    used by linbcg for preconditioner
2.8    vander    solve Vandermonde systems
2.8    toeplz    solve Toeplitz systems
2.9    choldc    Cholesky decomposition
2.9    cholsl    Cholesky backsubstitution
2.10   qrdcmp    QR decomposition
2.10   qrsolv    QR backsubstitution
2.10   rsolv     right triangular backsubstitution
2.10   qrupdt    update a QR decomposition
2.10   rotate    Jacobi rotation used by qrupdt

3.1    polint    polynomial interpolation
3.2    ratint    rational function interpolation
3.3    spline    construct a cubic spline
3.3    splint    cubic spline interpolation
3.4    locate    search an ordered table by bisection
3.4    hunt      search a table when calls are correlated
3.5    polcoe    polynomial coefficients from table of values
3.5    polcof    polynomial coefficients from table of values
3.6    polin2    two-dimensional polynomial interpolation
3.6    bcucof    construct two-dimensional bicubic
3.6    bcuint    two-dimensional bicubic interpolation
3.6    splie2    construct two-dimensional spline
3.6    splin2    two-dimensional spline interpolation

4.2    trapzd    trapezoidal rule
4.2    qtrap     integrate using trapezoidal rule
4.2    qsimp     integrate using Simpson's rule
4.3    qromb     integrate using Romberg adaptive method
4.4    midpnt    extended midpoint rule
4.4    qromo     integrate using open Romberg adaptive method
4.4    midinf    integrate a function on a semi-infinite interval
4.4    midsql    integrate a function with lower square-root singularity
4.4    midsqu    integrate a function with upper square-root singularity
4.4    midexp    integrate a function that decreases exponentially
4.5    qgaus     integrate a function by Gaussian quadratures
4.5    gauleg    Gauss-Legendre weights and abscissas
4.5    gaulag    Gauss-Laguerre weights and abscissas
4.5    gauher    Gauss-Hermite weights and abscissas
4.5    gaujac    Gauss-Jacobi weights and abscissas
4.5    gaucof    quadrature weights from orthogonal polynomials
4.5    orthog    construct nonclassical orthogonal polynomials
4.6    quad3d    integrate a function over a three-dimensional space

5.1    eulsum    sum a series by Euler–van Wijngaarden algorithm
5.3    ddpoly    evaluate a polynomial and its derivatives
5.3    poldiv    divide one polynomial by another
5.3    ratval    evaluate a rational function
5.7    dfridr    numerical derivative by Ridders' method
5.8    chebft    fit a Chebyshev polynomial to a function
5.8    chebev    Chebyshev polynomial evaluation
5.9    chder     derivative of a function already Chebyshev fitted
5.9    chint     integrate a function already Chebyshev fitted
5.10   chebpc    polynomial coefficients from a Chebyshev fit
5.10   pcshft    polynomial coefficients of a shifted polynomial
5.11   pccheb    inverse of chebpc; use to economize power series
5.12   pade      Padé approximant from power series coefficients
5.13   ratlsq    rational fit by least-squares method

6.1    gammln    logarithm of gamma function
6.1    factrl    factorial function
6.1    bico      binomial coefficients function
6.1    factln    logarithm of factorial function
6.1    beta      beta function
6.2    gammp     incomplete gamma function
6.2    gammq     complement of incomplete gamma function
6.2    gser      series used by gammp and gammq
6.2    gcf       continued fraction used by gammp and gammq
6.2    erff      error function
6.2    erffc     complementary error function
6.2    erfcc     complementary error function, concise routine
6.3    expint    exponential integral E_n
6.3    ei        exponential integral Ei
6.4    betai     incomplete beta function
6.4    betacf    continued fraction used by betai
6.5    bessj0    Bessel function J0
6.5    bessy0    Bessel function Y0
6.5    bessj1    Bessel function J1
6.5    bessy1    Bessel function Y1
6.5    bessy     Bessel function Y of general integer order
6.5    bessj     Bessel function J of general integer order
6.6    bessi0    modified Bessel function I0
6.6    bessk0    modified Bessel function K0
6.6    bessi1    modified Bessel function I1
6.6    bessk1    modified Bessel function K1
6.6    bessk     modified Bessel function K of integer order
6.6    bessi     modified Bessel function I of integer order
6.7    bessjy    Bessel functions of fractional order
6.7    beschb    Chebyshev expansion used by bessjy
6.7    bessik    modified Bessel functions of fractional order
6.7    airy      Airy functions
6.7    sphbes    spherical Bessel functions jn and yn
6.8    plgndr    Legendre polynomials, associated (spherical harmonics)
6.9    frenel    Fresnel integrals S(x) and C(x)
6.9    cisi      cosine and sine integrals Ci and Si
6.10   dawson    Dawson's integral
6.11   rf        Carlson's elliptic integral of the first kind
6.11   rd        Carlson's elliptic integral of the second kind
6.11   rj        Carlson's elliptic integral of the third kind
6.11   rc        Carlson's degenerate elliptic integral
6.11   ellf      Legendre elliptic integral of the first kind
6.11   elle      Legendre elliptic integral of the second kind
6.11   ellpi     Legendre elliptic integral of the third kind
6.11   sncndn    Jacobian elliptic functions
6.12   hypgeo    complex hypergeometric function
6.12   hypser    complex hypergeometric function, series evaluation
6.12   hypdrv    complex hypergeometric function, derivative of

7.1    ran0      random deviate by Park and Miller minimal standard
7.1    ran1      random deviate, minimal standard plus shuffle
7.1    ran2      random deviate by L'Ecuyer long period plus shuffle
7.1    ran3      random deviate by Knuth subtractive method
7.2    expdev    exponential random deviates
7.2    gasdev    normally distributed random deviates
7.3    gamdev    gamma-law distribution random deviates
7.3    poidev    Poisson distributed random deviates
7.3    bnldev    binomial distributed random deviates
7.4    irbit1    random bit sequence
7.4    irbit2    random bit sequence
7.5    psdes     “pseudo-DES” hashing of 64 bits
7.5    ran4      random deviates from DES-like hashing
7.7    sobseq    Sobol's quasi-random sequence
7.8    vegas     adaptive multidimensional Monte Carlo integration
7.8    rebin     sample rebinning used by vegas
7.8    miser     recursive multidimensional Monte Carlo integration
7.8    ranpt     get random point, used by miser

8.1    piksrt    sort an array by straight insertion
8.1    piksr2    sort two arrays by straight insertion
8.1    shell     sort an array by Shell's method
8.2    sort      sort an array by quicksort method
8.2    sort2     sort two arrays by quicksort method
8.3    hpsort    sort an array by heapsort method
8.4    indexx    construct an index for an array
8.4    sort3     sort, use an index to sort 3 or more arrays
8.4    rank      construct a rank table for an array
8.5    select    find the Nth largest in an array
8.5    selip     find the Nth largest, without altering an array
8.5    hpsel     find M largest values, without altering an array
8.6    eclass    determine equivalence classes from list
8.6    eclazz    determine equivalence classes from procedure

9.0    scrsho    graph a function to search for roots
9.1    zbrac     outward search for brackets on roots
9.1    zbrak     inward search for brackets on roots
9.1    rtbis     find root of a function by bisection
9.2    rtflsp    find root of a function by false-position
9.2    rtsec     find root of a function by secant method
9.2    zriddr    find root of a function by Ridders' method
9.3    zbrent    find root of a function by Brent's method
9.4    rtnewt    find root of a function by Newton-Raphson
9.4    rtsafe    find root of a function by Newton-Raphson and bisection
9.5    laguer    find a root of a polynomial by Laguerre's method
9.5    zroots    roots of a polynomial by Laguerre's method with deflation
9.5    zrhqr     roots of a polynomial by eigenvalue methods
9.5    qroot     complex or double root of a polynomial, Bairstow
9.6    mnewt     Newton's method for systems of equations
9.7    lnsrch    search along a line, used by newt
9.7    newt      globally convergent multi-dimensional Newton's method
9.7    fdjac     finite-difference Jacobian, used by newt
9.7    fmin      norm of a vector function, used by newt
9.7    broydn    secant method for systems of equations

10.1   mnbrak    bracket the minimum of a function
10.1   golden    find minimum of a function by golden section search
10.2   brent     find minimum of a function by Brent's method
10.3   dbrent    find minimum of a function using derivative information
10.4   amoeba    minimize in N-dimensions by downhill simplex method
10.4   amotry    evaluate a trial point, used by amoeba
10.5   powell    minimize in N-dimensions by Powell's method
10.5   linmin    minimum of a function along a ray in N-dimensions
10.5   f1dim     function used by linmin
10.6   frprmn    minimize in N-dimensions by conjugate gradient
10.6   dlinmin   minimum of a function along a ray using derivatives
10.6   df1dim    function used by dlinmin
10.7   dfpmin    minimize in N-dimensions by variable metric method
10.8   simplx    linear programming maximization of a linear function
10.8   simp1     linear programming, used by simplx
10.8   simp2     linear programming, used by simplx
10.8   simp3     linear programming, used by simplx
10.9   anneal    traveling salesman problem by simulated annealing
10.9   revcst    cost of a reversal, used by anneal
10.9   reverse   do a reversal, used by anneal
10.9   trncst    cost of a transposition, used by anneal
10.9   trnspt    do a transposition, used by anneal
10.9   metrop    Metropolis algorithm, used by anneal
10.9   amebsa    simulated annealing in continuous spaces
10.9   amotsa    evaluate a trial point, used by amebsa

11.1   jacobi    eigenvalues and eigenvectors of a symmetric matrix
11.1   eigsrt    eigenvectors, sorts into order by eigenvalue
11.2   tred2     Householder reduction of a real, symmetric matrix
11.3   tqli      eigensolution of a symmetric tridiagonal matrix
11.5   balanc    balance a nonsymmetric matrix
11.5   elmhes    reduce a general matrix to Hessenberg form
11.6   hqr       eigenvalues of a Hessenberg matrix

12.2   four1     fast Fourier transform (FFT) in one dimension
12.3   twofft    fast Fourier transform of two real functions
12.3   realft    fast Fourier transform of a single real function
12.3   sinft     fast sine transform
12.3   cosft1    fast cosine transform with endpoints
12.3   cosft2    “staggered” fast cosine transform
12.4   fourn     fast Fourier transform in multidimensions
12.5   rlft3     FFT of real data in two or three dimensions
12.6   fourfs    FFT for huge data sets on external media
12.6   fourew    rewind and permute files, used by fourfs

13.1   convlv    convolution or deconvolution of data using FFT
13.2   correl    correlation or autocorrelation of data using FFT
13.4   spctrm    power spectrum estimation using FFT
13.6   memcof    evaluate maximum entropy (MEM) coefficients
13.6   fixrts    reflect roots of a polynomial into unit circle
13.6   predic    linear prediction using MEM coefficients
13.7   evlmem    power spectral estimation from MEM coefficients
13.8   period    power spectrum of unevenly sampled data
13.8   fasper    power spectrum of unevenly sampled larger data sets
13.8   spread    extirpolate value into array, used by fasper
13.9   dftcor    compute endpoint corrections for Fourier integrals
13.9   dftint    high-accuracy Fourier integrals
13.10  wt1       one-dimensional discrete wavelet transform
13.10  daub4     Daubechies 4-coefficient wavelet filter
13.10  pwtset    initialize coefficients for pwt
13.10  pwt       partial wavelet transform
13.10  wtn       multidimensional discrete wavelet transform

14.1   moment    calculate moments of a data set
14.2   ttest     Student's t-test for difference of means
14.2   avevar    calculate mean and variance of a data set
14.2   tutest    Student's t-test for means, case of unequal variances
14.2   tptest    Student's t-test for means, case of paired data
14.2   ftest     F-test for difference of variances
14.3   chsone    chi-square test for difference between data and model
14.3   chstwo    chi-square test for difference between two data sets
14.3   ksone     Kolmogorov-Smirnov test of data against model
14.3   kstwo     Kolmogorov-Smirnov test between two data sets
14.3   probks    Kolmogorov-Smirnov probability function
14.4   cntab1    contingency table analysis using chi-square
14.4   cntab2    contingency table analysis using entropy measure
14.5   pearsn    Pearson's correlation between two data sets
14.6   spear     Spearman's rank correlation between two data sets
14.6   crank     replaces array elements by their rank
14.6   kendl1    correlation between two data sets, Kendall's tau
14.6   kendl2    contingency table analysis using Kendall's tau
14.7   ks2d1s    K–S test in two dimensions, data vs. model
14.7   quadct    count points by quadrants, used by ks2d1s
14.7   quadvl    quadrant probabilities, used by ks2d1s
14.7   ks2d2s    K–S test in two dimensions, data vs. data
14.8   savgol    Savitzky-Golay smoothing coefficients


15.2   fit       least-squares fit data to a straight line
15.3   fitexy    fit data to a straight line, errors in both x and y
15.3   chixy     used by fitexy to calculate a χ²
15.4   lfit      general linear least-squares fit by normal equations
15.4   covsrt    rearrange covariance matrix, used by lfit
15.4   svdfit    linear least-squares fit by singular value decomposition
15.4   svdvar    variances from singular value decomposition
15.4   fpoly     fit a polynomial using lfit or svdfit
15.4   fleg      fit a Legendre polynomial using lfit or svdfit
15.5   mrqmin    nonlinear least-squares fit, Marquardt's method
15.5   mrqcof    used by mrqmin to evaluate coefficients
15.5   fgauss    fit a sum of Gaussians using mrqmin
15.7   medfit    fit data to a straight line robustly, least absolute deviation
15.7   rofunc    fit data robustly, used by medfit

16.1   rk4       integrate one step of ODEs, fourth-order Runge-Kutta
16.1   rkdumb    integrate ODEs by fourth-order Runge-Kutta
16.2   rkqs      integrate one step of ODEs with accuracy monitoring
16.2   rkck      Cash-Karp-Runge-Kutta step used by rkqs
16.2   odeint    integrate ODEs with accuracy monitoring
16.3   mmid      integrate ODEs by modified midpoint method
16.4   bsstep    integrate ODEs, Bulirsch-Stoer step
16.4   pzextr    polynomial extrapolation, used by bsstep
16.4   rzextr    rational function extrapolation, used by bsstep
16.5   stoerm    integrate conservative second-order ODEs
16.6   stiff     integrate stiff ODEs by fourth-order Rosenbrock
16.6   jacobn    sample Jacobian routine for stiff
16.6   derivs    sample derivatives routine for stiff
16.6   simpr     integrate stiff ODEs by semi-implicit midpoint rule
16.6   stifbs    integrate stiff ODEs, Bulirsch-Stoer step

17.1   shoot     solve two point boundary value problem by shooting
17.2   shootf    ditto, by shooting to a fitting point
17.3   solvde    two point boundary value problem, solve by relaxation
17.3   bksub     backsubstitution, used by solvde
17.3   pinvs     diagonalize a sub-block, used by solvde
17.3   red       reduce columns of a matrix, used by solvde
17.4   sfroid    spheroidal functions by method of solvde
17.4   difeq     spheroidal matrix coefficients, used by sfroid
17.4   sphoot    spheroidal functions by method of shoot
17.4   sphfpt    spheroidal functions by method of shootf

18.1   fred2     solve linear Fredholm equations of the second kind
18.1   fredin    interpolate solutions obtained with fred2
18.2   voltra    linear Volterra equations of the second kind
18.3   wwghts    quadrature weights for an arbitrarily singular kernel
18.3   kermom    sample routine for moments of a singular kernel
18.3   quadmx    sample routine for a quadrature matrix
18.3   fredex    example of solving a singular Fredholm equation

19.5   sor       elliptic PDE solved by successive overrelaxation method
19.6   mglin     linear elliptic PDE solved by multigrid method
19.6   rstrct    half-weighting restriction, used by mglin, mgfas
19.6   interp    bilinear prolongation, used by mglin, mgfas
19.6   addint    interpolate and add, used by mglin
19.6   slvsml    solve on coarsest grid, used by mglin
19.6   relax     Gauss-Seidel relaxation, used by mglin
19.6   resid     calculate residual, used by mglin
19.6   copy      utility used by mglin, mgfas
19.6   fill0     utility used by mglin
19.6   mgfas     nonlinear elliptic PDE solved by multigrid method
19.6   relax2    Gauss-Seidel relaxation, used by mgfas
19.6   slvsm2    solve on coarsest grid, used by mgfas
19.6   lop       applies nonlinear operator, used by mgfas
19.6   matadd    utility used by mgfas
19.6   matsub    utility used by mgfas
19.6   anorm2    utility used by mgfas

20.1   machar    diagnose computer's floating arithmetic
20.2   igray     Gray code and its inverse
20.3   icrc1     cyclic redundancy checksum, used by icrc
20.3   icrc      cyclic redundancy checksum
20.3   decchk    decimal check digit calculation or verification
20.4   hufmak    construct a Huffman code
20.4   hufapp    append bits to a Huffman code, used by hufmak
20.4   hufenc    use Huffman code to encode and compress a character
20.4   hufdec    use Huffman code to decode and decompress a character
20.5   arcmak    construct an arithmetic code
20.5   arcode    encode or decode a character using arithmetic coding
20.5   arcsum    add integer to byte string, used by arcode
20.6   mpops     multiple precision arithmetic, simpler operations
20.6   mpmul     multiple precision multiply, using FFT methods
20.6   mpinv     multiple precision reciprocal
20.6   mpdiv     multiple precision divide and remainder
20.6   mpsqrt    multiple precision square root
20.6   mp2dfr    multiple precision conversion to decimal base
20.6   mppi      multiple precision example, compute many digits of π

Chapter 1.

Preliminaries

1.0 Introduction

This book, like its predecessor edition, is supposed to teach you methods of

numerical computing that are practical, efficient, and (insofar as possible) elegant.

We presume throughout this book that you, the reader, have particular tasks that you

want to get done. We view our job as educating you on how to proceed. Occasionally

we may try to reroute you briefly onto a particularly beautiful side road; but by and

large, we will guide you along main highways that lead to practical destinations.

Throughout this book, you will find us fearlessly editorializing, telling you

what you should and shouldn’t do. This prescriptive tone results from a conscious

decision on our part, and we hope that you will not find it irritating. We do not

claim that our advice is infallible! Rather, we are reacting against a tendency, in

the textbook literature of computation, to discuss every possible method that has

ever been invented, without ever offering a practical judgment on relative merit. We

do, therefore, offer you our practical judgments whenever we can. As you gain

experience, you will form your own opinion of how reliable our advice is.

We presume that you are able to read computer programs in C, that being

the language of this version of Numerical Recipes (Second Edition). The book

Numerical Recipes in FORTRAN (Second Edition) is separately available, if you

prefer to program in that language. Earlier editions of Numerical Recipes in Pascal

and Numerical Recipes Routines and Examples in BASIC are also available; while

not containing the additional material of the Second Edition versions in C and

FORTRAN, these versions are perfectly serviceable if Pascal or BASIC is your

language of choice.

When we include programs in the text, they look like this:

#include <math.h>
#define RAD (3.14159265/180.0)

void flmoon(int n, int nph, long *jd, float *frac)
Our programs begin with an introductory comment summarizing their purpose and explaining
their calling sequence. This routine calculates the phases of the moon. Given an integer n and
a code nph for the phase desired (nph = 0 for new moon, 1 for first quarter, 2 for full, 3 for last
quarter), the routine returns the Julian Day Number jd, and the fractional part of a day frac
to be added to it, of the nth such phase since January, 1900. Greenwich Mean Time is assumed.
{
    void nrerror(char error_text[]);
    int i;
    float am,as,c,t,t2,xtra;

    c=n+nph/4.0;                              /* This is how we comment an individual line. */
    t=c/1236.85;
    t2=t*t;
    as=359.2242+29.105356*c;                  /* You aren't really intended to understand */
    am=306.0253+385.816918*c+0.010730*t2;     /* this algorithm, but it does work! */
    *jd=2415020+28L*n+7L*nph;
    xtra=0.75933+1.53058868*c+((1.178e-4)-(1.55e-7)*t)*t2;
    if (nph == 0 || nph == 2)
        xtra += (0.1734-3.93e-4*t)*sin(RAD*as)-0.4068*sin(RAD*am);
    else if (nph == 1 || nph == 3)
        xtra += (0.1721-4.0e-4*t)*sin(RAD*as)-0.6280*sin(RAD*am);
    else nrerror("nph is unknown in flmoon"); /* This is how we will indicate error conditions. */
    i=(int)(xtra >= 0.0 ? floor(xtra) : ceil(xtra-1.0));
    *jd += i;
    *frac=xtra-i;
}

If the syntax of the function definition above looks strange to you, then you are

probably used to the older Kernighan and Ritchie (“K&R”) syntax, rather than that of

the newer ANSI C. In this edition, we adopt ANSI C as our standard. You might want

to look ahead to §1.2 where ANSI C function prototypes are discussed in more detail.

Note our convention of handling all errors and exceptional cases with a statement

like nrerror("some error message");. The function nrerror() is part of a

small file of utility programs, nrutil.c, listed in Appendix B at the back of the

book. This Appendix includes a number of other utilities that we will describe later in

this chapter. Function nrerror() prints the indicated error message to your stderr

device (usually your terminal screen), and then invokes the function exit(), which

terminates execution. The function exit() is in every C library we know of; but if

you find it missing, you can modify nrerror() so that it does anything else that will

halt execution. For example, you can have it pause for input from the keyboard, and

then manually interrupt execution. In some applications, you will want to modify

nrerror() to do more sophisticated error handling, for example to transfer control

somewhere else, with an error flag or error code set.

We will have more to say about the C programming language, its conventions

and style, in §1.1 and §1.2.

Computational Environment and Program Validation

Our goal is that the programs in this book be as portable as possible, across

different platforms (models of computer), across different operating systems, and

across different C compilers. C was designed with this type of portability in

mind. Nevertheless, we have found that there is no substitute for actually checking

all programs on a variety of compilers, in the process uncovering differences in

library structure or contents, and even occasional differences in allowed syntax. As

surrogates for the large number of possible combinations, we have tested all the

programs in this book on the combinations of machines, operating systems, and

compilers shown on the accompanying table. More generally, the programs should

run without modification on any compiler that implements the ANSI C standard,

as described for example in Harbison and Steele’s excellent book [1]. With small

modifications, our programs should run on any compiler that implements the older,

de facto K&R standard [2]. An example of the kind of trivial incompatibility to
watch out for is that ANSI C requires the memory allocation functions malloc()
and free() to be declared via the header stdlib.h; some older compilers require
them to be declared with the header file malloc.h, while others regard them as
inherent in the language and require no header file at all.

Tested Machines and Compilers

Hardware                    O/S Version               Compiler Version
IBM PC compatible 486/33    MS-DOS 5.0/Windows 3.1    Microsoft C/C++ 7.0
IBM PC compatible 486/33    MS-DOS 5.0                Borland C/C++ 2.0
IBM RS/6000                 AIX 3.2                   IBM xlc 1.02
DECstation 5000/25          ULTRIX 4.2A               CodeCenter (Saber) C 3.1.1
DECsystem 5400              ULTRIX 4.1                GNU C Compiler 2.1
Sun SPARCstation 2          SunOS 4.1                 GNU C Compiler 1.40
DECstation 5000/200         ULTRIX 4.2                DEC RISC C 2.1*
Sun SPARCstation 2          SunOS 4.1                 Sun cc 1.1*

*compiler version does not fully implement ANSI C; only K&R validated

In validating the programs, we have taken the program source code directly

from the machine-readable form of the book’s manuscript, to decrease the chance

of propagating typographical errors. “Driver” or demonstration programs that we

used as part of our validations are available separately as the Numerical Recipes

Example Book (C), as well as in machine-readable form. If you plan to use more

than a few of the programs in this book, or if you plan to use programs in this book

on more than one different computer, then you may find it useful to obtain a copy

of these demonstration programs.

Of course we would be foolish to claim that there are no bugs in our programs,

and we do not make such a claim. We have been very careful, and have benefitted

from the experience of the many readers who have written to us. If you find a new

bug, please document it and tell us!

Compatibility with the First Edition

If you are accustomed to the Numerical Recipes routines of the First Edition, rest

assured: almost all of them are still here, with the same names and functionalities,

often with major improvements in the code itself. In addition, we hope that you

will soon become equally familiar with the added capabilities of the more than 100

routines that are new to this edition.

We have retired a small number of First Edition routines, those that we believe

to be clearly dominated by better methods implemented in this edition. A table,

following, lists the retired routines and suggests replacements.

First Edition users should also be aware that some routines common to

both editions have alterations in their calling interfaces, so are not directly “plug

compatible.” A fairly complete list is: chsone, chstwo, covsrt, dfpmin, laguer,

lfit, memcof, mrqcof, mrqmin, pzextr, ran4, realft, rzextr, shoot, shootf.

There may be others (depending in part on which printing of the First Edition is taken

for the comparison). If you have written software of any appreciable complexity
that is dependent on First Edition routines, we do not recommend blindly replacing
them by the corresponding routines in this book. We do recommend that any new
programming efforts use the new routines.

Previous Routines Omitted from This Edition

Name(s)           Replacement(s)                              Comment
adi               mglin or mgfas                              better method
cosft             cosft1 or cosft2                            choice of boundary conditions
cel, el2          rf, rd, rj, rc                              better algorithms
des, desks        ran4 now uses psdes                         was too slow
mdian1, mdian2    select, selip                               more general
qcksrt            sort                                        name change (sort is now hpsort)
rkqc              rkqs                                        better method
smooft            use convlv with coefficients from savgol
sparse            linbcg                                      more general

About References

You will find references, and suggestions for further reading, listed at the

end of most sections of this book. References are cited in the text by bracketed

numbers like this [3].

Because computer algorithms often circulate informally for quite some time

before appearing in a published form, the task of uncovering “primary literature”

is sometimes quite difficult. We have not attempted this, and we do not pretend

to any degree of bibliographical completeness in this book. For topics where a

substantial secondary literature exists (discussion in textbooks, reviews, etc.) we

have consciously limited our references to a few of the more useful secondary

sources, especially those with good references to the primary literature. Where the

existing secondary literature is insufficient, we give references to a few primary

sources that are intended to serve as starting points for further reading, not as

complete bibliographies for the field.

The order in which references are listed is not necessarily significant. It reflects a

compromise between listing cited references in the order cited, and listing suggestions

for further reading in a roughly prioritized order, with the most useful ones first.

The remaining three sections of this chapter review some basic concepts of

programming (control structures, etc.), discuss a set of conventions specific to C

that we have adopted in this book, and introduce some fundamental concepts in

numerical analysis (roundoff error, etc.). Thereafter, we plunge into the substantive

material of the book.

CITED REFERENCES AND FURTHER READING:

Harbison, S.P., and Steele, G.L., Jr. 1991, C: A Reference Manual, 3rd ed. (Englewood Cliffs,

NJ: Prentice-Hall). [1]


Kernighan, B., and Ritchie, D. 1978, The C Programming Language (Englewood Cliffs, NJ:

Prentice-Hall). [2] [Reference for K&R “traditional” C. Later editions of this book conform

to the ANSI C standard.]

Meeus, J. 1982, Astronomical Formulae for Calculators, 2nd ed., revised and enlarged (Richmond, VA: Willmann-Bell). [3]

1.1 Program Organization and Control Structures

We sometimes like to point out the close analogies between computer programs,

on the one hand, and written poetry or written musical scores, on the other. All

three present themselves as visual media, symbols on a two-dimensional page or

computer screen. Yet, in all three cases, the visual, two-dimensional, frozen-in-time

representation communicates (or is supposed to communicate) something rather

different, namely a process that unfolds in time. A poem is meant to be read; music,

played; a program, executed as a sequential series of computer instructions.

In all three cases, the target of the communication, in its visual form, is a human

being. The goal is to transfer to him/her, as efficiently as can be accomplished,

the greatest degree of understanding, in advance, of how the process will unfold in

time. In poetry, this human target is the reader. In music, it is the performer. In

programming, it is the program user.

Now, you may object that the target of communication of a program is not

a human but a computer, that the program user is only an irrelevant intermediary,

a lackey who feeds the machine. This is perhaps the case in the situation where

the business executive pops a diskette into a desktop computer and feeds that

computer a black-box program in binary executable form. The computer, in this

case, doesn’t much care whether that program was written with “good programming

practice” or not.

We envision, however, that you, the readers of this book, are in quite a different

situation. You need, or want, to know not just what a program does, but also how

it does it, so that you can tinker with it and modify it to your particular application.

You need others to be able to see what you have done, so that they can criticize or

admire. In such cases, where the desired goal is maintainable or reusable code, the

targets of a program’s communication are surely human, not machine.

One key to achieving good programming practice is to recognize that programming, music, and poetry — all three being symbolic constructs of the human

brain — are naturally structured into hierarchies that have many different nested

levels. Sounds (phonemes) form small meaningful units (morphemes) which in turn

form words; words group into phrases, which group into sentences; sentences make

paragraphs, and these are organized into higher levels of meaning. Notes form

musical phrases, which form themes, counterpoints, harmonies, etc.; which form

movements, which form concertos, symphonies, and so on.

The structure in programs is equally hierarchical. Appropriately, good programming practice brings different techniques to bear on the different levels [1-3].

At a low level is the ASCII character set. Then, constants, identifiers, operands, operators. Then program statements, like a[j+1]=b+c/3.0;. Here, the best programming advice is simply be clear, or (correspondingly) don't be too tricky. You

might momentarily be proud of yourself at writing the single line

k=(2-j)*(1+3*j)/2;

if you want to permute cyclically one of the values j = (0, 1, 2) into respectively

k = (1, 2, 0). You will regret it later, however, when you try to understand that

line. Better, and likely also faster, is

k=j+1;

if (k == 3) k=0;

Many programming stylists would even argue for the ploddingly literal

switch (j) {

case 0: k=1; break;

case 1: k=2; break;

case 2: k=0; break;

default: {

fprintf(stderr,"unexpected value for j");

exit(1);

}

}

on the grounds that it is both clear and additionally safeguarded from wrong assumptions about the possible values of j. Our preference among the implementations

is for the middle one.

In this simple example, we have in fact traversed several levels of hierarchy:

Statements frequently come in “groups” or “blocks” which make sense only taken

as a whole. The middle fragment above is one example. Another is

swap=a[j];

a[j]=b[j];

b[j]=swap;

which makes immediate sense to any programmer as the exchange of two variables,

while

ans=sum=0.0;

n=1;

is very likely to be an initialization of variables prior to some iterative process. This

level of hierarchy in a program is usually evident to the eye. It is good programming

practice to put in comments at this level, e.g., “initialize” or “exchange variables.”

The next level is that of control structures. These are things like the switch

construction in the example above, for loops, and so on. This level is sufficiently

important, and relevant to the hierarchical level of the routines in this book, that

we will come back to it just below.

At still higher levels in the hierarchy, we have functions and modules, and the

whole “global” organization of the computational task to be done. In the musical

analogy, we are now at the level of movements and complete works. At these levels,


modularization and encapsulation become important programming concepts, the

general idea being that program units should interact with one another only through

clearly defined and narrowly circumscribed interfaces. Good modularization practice

is an essential prerequisite to the success of large, complicated software projects,

especially those employing the efforts of more than one programmer. It is also good

practice (if not quite as essential) in the less massive programming tasks that an

individual scientist, or reader of this book, encounters.

Some computer languages, such as Modula-2 and C++, promote good modularization with higher-level language constructs absent in C. In Modula-2, for example,

functions, type definitions, and data structures can be encapsulated into “modules”

that communicate through declared public interfaces and whose internal workings

are hidden from the rest of the program [4]. In the C++ language, the key concept

is “class,” a user-definable generalization of data type that provides for data hiding,

automatic initialization of data, memory management, dynamic typing, and operator

overloading (i.e., the user-definable extension of operators like + and * so as to be

appropriate to operands in any particular class) [5]. Properly used in defining the data

structures that are passed between program units, classes can clarify and circumscribe

these units’ public interfaces, reducing the chances of programming error and also

allowing a considerable degree of compile-time and run-time error checking.

Beyond modularization, though depending on it, lie the concepts of object-oriented programming. Here a programming language, such as C++ or Turbo Pascal

5.5 [6], allows a module’s public interface to accept redefinitions of types or actions,

and these redefinitions become shared all the way down through the module’s

hierarchy (so-called polymorphism). For example, a routine written to invert a

matrix of real numbers could — dynamically, at run time — be made able to handle

complex numbers by overloading complex data types and corresponding definitions

of the arithmetic operations. Additional concepts of inheritance (the ability to define

a data type that “inherits” all the structure of another type, plus additional structure

of its own), and object extensibility (the ability to add functionality to a module

without access to its source code, e.g., at run time), also come into play.

We have not attempted to modularize, or make objects out of, the routines in

this book, for at least two reasons. First, the chosen language, C, does not really make

this possible. Second, we envision that you, the reader, might want to incorporate

the algorithms in this book, a few at a time, into modules or objects with a structure

of your own choosing. There does not exist, at present, a standard or accepted set

of “classes” for scientific object-oriented computing. While we might have tried to

invent such a set, doing so would have inevitably tied the algorithmic content of the

book (which is its raison d’être) to some rather specific, and perhaps haphazard, set

of choices regarding class definitions.

On the other hand, we are not unfriendly to the goals of modular and object-oriented programming. Within the limits of C, we have therefore tried to structure

our programs to be “object friendly.” That is one reason we have adopted ANSI

C with its function prototyping as our default C dialect (see §1.2). Also, within

our implementation sections, we have paid particular attention to the practices of

structured programming, as we now discuss.


Control Structures

An executing program unfolds in time, but not strictly in the linear order in

which the statements are written. Program statements that affect the order in which

statements are executed, or that affect whether statements are executed, are called

control statements. Control statements never make useful sense by themselves. They

make sense only in the context of the groups or blocks of statements that they in turn

control. If you think of those blocks as paragraphs containing sentences, then the

control statements are perhaps best thought of as the indentation of the paragraph

and the punctuation between the sentences, not the words within the sentences.

We can now say what the goal of structured programming is. It is to make

program control manifestly apparent in the visual presentation of the program. You

see that this goal has nothing at all to do with how the computer sees the program.

As already remarked, computers don’t care whether you use structured programming

or not. Human readers, however, do care. You yourself will also care, once you

discover how much easier it is to perfect and debug a well-structured program than

one whose control structure is obscure.

You accomplish the goals of structured programming in two complementary

ways. First, you acquaint yourself with the small number of essential control

structures that occur over and over again in programming, and that are therefore

given convenient representations in most programming languages. You should learn

to think about your programming tasks, insofar as possible, exclusively in terms of

these standard control structures. In writing programs, you should get into the habit

of representing these standard control structures in consistent, conventional ways.

“Doesn’t this inhibit creativity?” our students sometimes ask. Yes, just

as Mozart’s creativity was inhibited by the sonata form, or Shakespeare’s by the

metrical requirements of the sonnet. The point is that creativity, when it is meant to

communicate, does well under the inhibitions of appropriate restrictions on format.

Second, you avoid, insofar as possible, control statements whose controlled

blocks or objects are difficult to discern at a glance. This means, in practice, that you

must try to avoid named labels on statements and goto’s. It is not the goto’s that

are dangerous (although they do interrupt one’s reading of a program); the named

statement labels are the hazard. In fact, whenever you encounter a named statement

label while reading a program, you will soon become conditioned to get a sinking

feeling in the pit of your stomach. Why? Because the following questions will, by

habit, immediately spring to mind: Where did control come from in a branch to this

label? It could be anywhere in the routine! What circumstances resulted in a branch

to this label? They could be anything! Certainty becomes uncertainty, understanding

dissolves into a morass of possibilities.

Some examples are now in order to make these considerations more concrete

(see Figure 1.1.1).

Catalog of Standard Structures

Iteration.

In C, simple iteration is performed with a for loop, for example

for (j=2;j<=1000;j++) {
    b[j]=a[j-1];
    a[j-1]=j;
}

IF structure. Some care must be exercised in constructing nested if clauses, since an else can attach to an unintended if. For example, consider the following:

if (b > 3)
    if (a > 3) b += 1;
else b -= 1;            /* questionable! */

As judged by the indentation used on successive lines, the intent of the writer of

this code is the following: ‘If b is greater than 3 and a is greater than 3, then

increment b. If b is not greater than 3, then decrement b.’ According to the rules

of C, however, the actual meaning is ‘If b is greater than 3, then evaluate a. If a is

greater than 3, then increment b, and if a is less than or equal to 3, decrement b.’ The

point is that an else clause is associated with the most recent open if statement,

no matter how you lay it out on the page. Such confusions in meaning are easily

resolved by the inclusion of braces. They may in some instances be technically

superfluous; nevertheless, they clarify your intent and improve the program. The

above fragment should be written as

if (b > 3) {

if (a > 3) b += 1;

} else {

b -= 1;

}

Here is a working program that consists dominantly of if control statements:

#include <math.h>

#define IGREG (15+31L*(10+12L*1582))    /* Gregorian Calendar adopted Oct. 15, 1582. */

long julday(int mm, int id, int iyyy)
/* In this routine julday returns the Julian Day Number that begins at noon of the
calendar date specified by month mm, day id, and year iyyy, all integer variables.
Positive year signifies A.D.; negative, B.C. Remember that the year after 1 B.C.
was 1 A.D. */
{
    void nrerror(char error_text[]);
    long jul;
    int ja,jy=iyyy,jm;

    if (jy == 0) nrerror("julday: there is no year zero.");
    if (jy < 0) ++jy;
    if (mm > 2) {                       /* Here is an example of a block IF-structure. */
        jm=mm+1;
    } else {
        --jy;
        jm=mm+13;
    }
    jul = (long) (floor(365.25*jy)+floor(30.6001*jm)+id+1720995);
    if (id+31L*(mm+12L*iyyy) >= IGREG) {    /* Test whether to change to Gregorian Calendar. */
        ja=(int)(0.01*jy);
        jul += 2-ja+(int) (0.25*ja);
    }
    return jul;
}

(Astronomers number each 24-hour period, starting and ending at noon, with

a unique integer, the Julian Day Number [7]. Julian Day Zero was a very long

time ago; a convenient reference point is that Julian Day 2440000 began at noon

of May 23, 1968. If you know the Julian Day Number that begins at noon of a

given calendar date, then the day of the week of that date is obtained by adding 1 and taking the result modulo 7; a zero answer corresponds to Sunday, 1 to

Monday, . . . , 6 to Saturday.)

While iteration. Most languages (though not FORTRAN, incidentally) provide

for structures like the following C example:

while (n < 1000) {

n *= 2;

j += 1;

}

It is the particular feature of this structure that the control-clause (in this case

n < 1000) is evaluated before each iteration. If the clause is not true, the enclosed

statements will not be executed. In particular, if this code is encountered at a time

when n is greater than or equal to 1000, the statements will not even be executed once.

Do-While iteration. Companion to the while iteration is a related control structure that tests its control-clause at the end of each iteration. In C, it looks

like this:

do {

n *= 2;

j += 1;

} while (n < 1000);

In this case, the enclosed statements will be executed at least once, independent

of the initial value of n.

Break. In this case, you have a loop that is repeated indefinitely until some

condition tested somewhere in the middle of the loop (and possibly tested in more


than one place) becomes true. At that point you wish to exit the loop and proceed

with what comes after it. In C the structure is implemented with the simple break

statement, which terminates execution of the innermost for, while, do, or switch

construction and proceeds to the next sequential instruction. (In Pascal and standard

FORTRAN, this structure requires the use of statement labels, to the detriment of clear

programming.) A typical usage of the break statement is:

for(;;) {

[statements before the test]

if (...) break;

[statements after the test]

}

[next sequential instruction]

Here is a program that uses several different iteration structures. One of us was

once asked, for a scavenger hunt, to find the date of a Friday the 13th on which the

moon was full. This is a program which accomplishes that task, giving incidentally

all other Fridays the 13th as a by-product.

#include <stdio.h>
#include <math.h>

#define ZON -5.0        /* Time zone -5 is Eastern Standard Time. */
#define IYBEG 1900      /* The range of dates to be searched. */
#define IYEND 2000

int main(void)  /* Program badluk */
{
    void flmoon(int n, int nph, long *jd, float *frac);
    long julday(int mm, int id, int iyyy);
    int ic,icon,idwk,im,iyyy,n;
    float timzon = ZON/24.0,frac;
    long jd,jday;

    printf("\nFull moons on Friday the 13th from %5d to %5d\n",IYBEG,IYEND);
    for (iyyy=IYBEG;iyyy<=IYEND;iyyy++) {       /* Loop over each year, */
        for (im=1;im<=12;im++) {                /* and each month. */
            jday=julday(im,13,iyyy);            /* Is the 13th a Friday? */
            idwk=(int) ((jday+1) % 7);
            if (idwk == 5) {
                printf("\n%2d/13/%4d\n",im,iyyy);   /* Print all Fridays the 13th. */
                n=(int)(12.37*(iyyy-1900+(im-0.5)/12.0));
                /* This value of n is a first approximation to how many full moons
                   have occurred since 1900. We feed it into the phase routine and
                   adjust it up or down until we determine whether our desired 13th
                   was or was not a full moon. The variable icon signals the
                   direction of adjustment. */
                icon=0;
                for (;;) {
                    flmoon(n,2,&jd,&frac);      /* Get date of full moon n. */
                    frac=24.0*(frac+timzon);    /* Convert to hours in correct time zone. */
                    if (frac < 0.0) {           /* Convert from Julian Days beginning at */
                        --jd;                   /* noon to civil days beginning at */
                        frac += 24.0;           /* midnight. */
                    }
                    if (frac > 12.0) {
                        ++jd;
                        frac -= 12.0;
                    } else
                        frac += 12.0;
                    if (jd == jday) {           /* Did we hit our target day? */
                        printf("%s%5.1f%s\n","Full moon ",frac,
                            " hrs after midnight (EST).");
                        break;                  /* Part of the break-structure, a match. */
                    } else {                    /* Didn't hit it. */
                        ic=(jday >= jd ? 1 : -1);
                        if (ic == (-icon)) break;   /* Another break, case of no match. */
                        icon=ic;
                        n += ic;
                    }
                }
            }
        }
    }
    return 0;
}

If you are merely curious, there were (or will be) occurrences of a full moon

on Friday the 13th (time zone GMT-5) on: 3/13/1903, 10/13/1905, 6/13/1919,

1/13/1922, 11/13/1970, 2/13/1987, 10/13/2000, 9/13/2019, and 8/13/2049.

Other “standard” structures.

Our advice is to avoid them. Every

programming language has some number of “goodies” that the designer just couldn’t

resist throwing in. They seemed like a good idea at the time. Unfortunately they

don’t stand the test of time! Your program becomes difficult to translate into other

languages, and difficult to read (because rarely used structures are unfamiliar to the

reader). You can almost always accomplish the supposed conveniences of these

structures in other ways.

In C, the most problematic control structure is the switch...case...default

construction (see Figure 1.1.1), which has historically been burdened by uncertainty,

from compiler to compiler, about what data types are allowed in its control expression.

Data types char and int are universally supported. For other data types, e.g., float

or double, the structure should be replaced by a more recognizable and translatable

if...else construction. ANSI C allows the control expression to be of type long,

but many older compilers do not.

The continue; construction, while benign, can generally be replaced by an

if construction with no loss of clarity.

About “Advanced Topics”

Material set in smaller type, like this, signals an “advanced topic,” either one outside of

the main argument of the chapter, or else one requiring of you more than the usual assumed

mathematical background, or else (in a few cases) a discussion that is more speculative or an

algorithm that is less well-tested. Nothing important will be lost if you skip the advanced

topics on a first reading of the book.

You may have noticed that, by its looping over the months and years, the program badluk

avoids using any algorithm for converting a Julian Day Number back into a calendar date. A

routine for doing just this is not very interesting structurally, but it is occasionally useful:

#include <math.h>

#define IGREG 2299161

void caldat(long julian, int *mm, int *id, int *iyyy)
/* Inverse of the function julday given above. Here julian is input as a Julian Day
Number, and the routine outputs mm, id, and iyyy as the month, day, and year on
which the specified Julian Day started at noon. */
{
    long ja,jalpha,jb,jc,jd,je;

    if (julian >= IGREG) {              /* Cross-over to Gregorian Calendar produces */
        jalpha=(long)(((float) (julian-1867216)-0.25)/36524.25);    /* this correction. */
        ja=julian+1+jalpha-(long) (0.25*jalpha);
    } else if (julian < 0) {            /* Make day number positive by adding integer */
        ja=julian+36525*(1-julian/36525);   /* number of Julian centuries, then subtract */
    } else                                  /* them off at the end. */
        ja=julian;
    jb=ja+1524;
    jc=(long)(6680.0+((float) (jb-2439870)-122.1)/365.25);
    jd=(long)(365*jc+(0.25*jc));
    je=(long)((jb-jd)/30.6001);
    *id=jb-jd-(long) (30.6001*je);
    *mm=je-1;
    if (*mm > 12) *mm -= 12;
    *iyyy=jc-4715;
    if (*mm > 2) --(*iyyy);
    if (*iyyy <= 0) --(*iyyy);
    if (julian < 0) *iyyy -= 100*(1-julian/36525);
}

The Art of Scientific Computing

Second Edition

William H. Press

Harvard-Smithsonian Center for Astrophysics

Saul A. Teukolsky

Department of Physics, Cornell University

William T. Vetterling

Polaroid Corporation

Brian P. Flannery

EXXON Research and Engineering Company

CAMBRIDGE UNIVERSITY PRESS

Cambridge New York Port Chester Melbourne Sydney

Published by the Press Syndicate of the University of Cambridge

The Pitt Building, Trumpington Street, Cambridge CB2 1RP

40 West 20th Street, New York, NY 10011-4211, USA

10 Stamford Road, Oakleigh, Melbourne 3166, Australia

c Cambridge University Press 1988, 1992

Copyright

except for §13.10 and Appendix B, which are placed into the public domain,

and except for all other computer programs and procedures, which are

c Numerical Recipes Software 1987, 1988, 1992, 1997

Copyright

All Rights Reserved.

Some sections of this book were originally published, in different form, in Computers

c American Institute of Physics, 1988–1992.

in Physics magazine, Copyright

First Edition originally published 1988; Second Edition originally published 1992.

Reprinted with corrections, 1993, 1994, 1995, 1997.

This reprinting is corrected to software version 2.08

Printed in the United States of America

Typeset in TEX

Without an additional license to use the contained software, this book is intended as

a text and reference book, for reading purposes only. A free license for limited use of the

software by the individual owner of a copy of this book who personally types one or more

routines into a single computer is granted under terms described on p. xvii. See the section

“License Information” (pp. xvi–xviii) for information on obtaining more general licenses

at low cost.

Machine-readable media containing the software in this book, with included licenses

for use on a single screen, are available from Cambridge University Press. See the

order form at the back of the book, email to “orders@cup.org” (North America) or

“trade@cup.cam.ac.uk” (rest of world), or write to Cambridge University Press, 110

Midland Avenue, Port Chester, NY 10573 (USA), for further information.

The software may also be downloaded, with immediate purchase of a license

also possible, from the Numerical Recipes Software Web Site (http://www.nr.com).

Unlicensed transfer of Numerical Recipes programs to any other format, or to any computer

except one that is specifically licensed, is strictly prohibited. Technical questions,

corrections, and requests for information should be addressed to Numerical Recipes

Software, P.O. Box 243, Cambridge, MA 02238 (USA), email “info@nr.com”, or fax

781 863-1739.

Library of Congress Cataloging in Publication Data

Numerical recipes in C : the art of scientific computing / William H. Press

. . . [et al.]. – 2nd ed.

Includes bibliographical references (p.

) and index.

ISBN 0-521-43108-5

1. Numerical analysis–Computer programs. 2. Science–Mathematics–Computer programs.

3. C (Computer program language) I. Press, William H.

QA297.N866 1992

519.4028553–dc20

92-8876

A catalog record for this book is available from the British Library.

ISBN

ISBN

ISBN

ISBN

ISBN

0

0

0

0

0

521 43108

521 43720

521 43724

521 57608

521 57607

5

2

5

3

5

Book

Example book in C

C diskette (IBM 3.5 , 1.44M)

CDROM (IBM PC/Macintosh)

CDROM (UNIX)

Contents

Preface to the Second Edition

1

Preface to the First Edition

xiv

License Information

xvi

Computer Programs by Chapter and Section

xix

Preliminaries

1

1.0 Introduction

1.1 Program Organization and Control Structures

1.2 Some C Conventions for Scientific Computing

1.3 Error, Accuracy, and Stability

2

3

xi

1

5

15

28

Solution of Linear Algebraic Equations

32

2.0 Introduction

2.1 Gauss-Jordan Elimination

2.2 Gaussian Elimination with Backsubstitution

2.3 LU Decomposition and Its Applications

2.4 Tridiagonal and Band Diagonal Systems of Equations

2.5 Iterative Improvement of a Solution to Linear Equations

2.6 Singular Value Decomposition

2.7 Sparse Linear Systems

2.8 Vandermonde Matrices and Toeplitz Matrices

2.9 Cholesky Decomposition

2.10 QR Decomposition

2.11 Is Matrix Inversion an N 3 Process?

32

36

41

43

50

55

59

71

90

96

98

102

Interpolation and Extrapolation

3.0 Introduction

3.1 Polynomial Interpolation and Extrapolation

3.2 Rational Function Interpolation and Extrapolation

3.3 Cubic Spline Interpolation

3.4 How to Search an Ordered Table

3.5 Coefficients of the Interpolating Polynomial

3.6 Interpolation in Two or More Dimensions

v

105

105

108

111

113

117

120

123

vi

4

Contents

Integration of Functions

4.0 Introduction

4.1 Classical Formulas for Equally Spaced Abscissas

4.2 Elementary Algorithms

4.3 Romberg Integration

4.4 Improper Integrals

4.5 Gaussian Quadratures and Orthogonal Polynomials

4.6 Multidimensional Integrals

5

Evaluation of Functions

5.0 Introduction

5.1 Series and Their Convergence

5.2 Evaluation of Continued Fractions

5.3 Polynomials and Rational Functions

5.4 Complex Arithmetic

5.5 Recurrence Relations and Clenshaw’s Recurrence Formula

5.6 Quadratic and Cubic Equations

5.7 Numerical Derivatives

5.8 Chebyshev Approximation

5.9 Derivatives or Integrals of a Chebyshev-approximated Function

5.10 Polynomial Approximation from Chebyshev Coefficients

5.11 Economization of Power Series

5.12 Pad?e Approximants

5.13 Rational Chebyshev Approximation

5.14 Evaluation of Functions by Path Integration

6

Special Functions

6.0 Introduction

6.1 Gamma Function, Beta Function, Factorials, Binomial Coefficients

6.2 Incomplete Gamma Function, Error Function, Chi-Square

Probability Function, Cumulative Poisson Function

6.3 Exponential Integrals

6.4 Incomplete Beta Function, Student’s Distribution, F-Distribution,

Cumulative Binomial Distribution

6.5 Bessel Functions of Integer Order

6.6 Modified Bessel Functions of Integer Order

6.7 Bessel Functions of Fractional Order, Airy Functions, Spherical

Bessel Functions

6.8 Spherical Harmonics

6.9 Fresnel Integrals, Cosine and Sine Integrals

6.10 Dawson’s Integral

6.11 Elliptic Integrals and Jacobian Elliptic Functions

6.12 Hypergeometric Functions

7

Random Numbers

7.0 Introduction

7.1 Uniform Deviates

129

129

130

136

140

141

147

161

165

165

165

169

173

176

178

183

186

190

195

197

198

200

204

208

212

212

213

216

222

226

230

236

240

252

255

259

261

271

274

274

275

Contents

7.2 Transformation Method: Exponential and Normal Deviates

7.3 Rejection Method: Gamma, Poisson, Binomial Deviates

7.4 Generation of Random Bits

7.5 Random Sequences Based on Data Encryption

7.6 Simple Monte Carlo Integration

7.7 Quasi- (that is, Sub-) Random Sequences

7.8 Adaptive and Recursive Monte Carlo Methods

8

Sorting

8.0 Introduction

8.1 Straight Insertion and Shell’s Method

8.2 Quicksort

8.3 Heapsort

8.4 Indexing and Ranking

8.5 Selecting the M th Largest

8.6 Determination of Equivalence Classes

9

Root Finding and Nonlinear Sets of Equations

9.0 Introduction

9.1 Bracketing and Bisection

9.2 Secant Method, False Position Method, and Ridders’ Method

9.3 Van Wijngaarden–Dekker–Brent Method

9.4 Newton-Raphson Method Using Derivative

9.5 Roots of Polynomials

9.6 Newton-Raphson Method for Nonlinear Systems of Equations

9.7 Globally Convergent Methods for Nonlinear Systems of Equations

10 Minimization or Maximization of Functions

10.0 Introduction

10.1 Golden Section Search in One Dimension

10.2 Parabolic Interpolation and Brent’s Method in One Dimension

10.3 One-Dimensional Search with First Derivatives

10.4 Downhill Simplex Method in Multidimensions

10.5 Direction Set (Powell’s) Methods in Multidimensions

10.6 Conjugate Gradient Methods in Multidimensions

10.7 Variable Metric Methods in Multidimensions

10.8 Linear Programming and the Simplex Method

10.9 Simulated Annealing Methods

11 Eigensystems

11.0 Introduction

11.1 Jacobi Transformations of a Symmetric Matrix

11.2 Reduction of a Symmetric Matrix to Tridiagonal Form:

Givens and Householder Reductions

11.3 Eigenvalues and Eigenvectors of a Tridiagonal Matrix

11.4 Hermitian Matrices

11.5 Reduction of a General Matrix to Hessenberg Form

vii

287

290

296

300

304

309

316

329

329

330

332

336

338

341

345

347

347

350

354

359

362

369

379

383

394

394

397

402

405

408

412

420

425

430

444

456

456

463

469

475

481

482

viii

Contents

11.6 The QR Algorithm for Real Hessenberg Matrices

11.7 Improving Eigenvalues and/or Finding Eigenvectors by

Inverse Iteration

12 Fast Fourier Transform

12.0 Introduction

12.1 Fourier Transform of Discretely Sampled Data

12.2 Fast Fourier Transform (FFT)

12.3 FFT of Real Functions, Sine and Cosine Transforms

12.4 FFT in Two or More Dimensions

12.5 Fourier Transforms of Real Data in Two and Three Dimensions

12.6 External Storage or Memory-Local FFTs

13 Fourier and Spectral Applications

13.0 Introduction

13.1 Convolution and Deconvolution Using the FFT

13.2 Correlation and Autocorrelation Using the FFT

13.3 Optimal (Wiener) Filtering with the FFT

13.4 Power Spectrum Estimation Using the FFT

13.5 Digital Filtering in the Time Domain

13.6 Linear Prediction and Linear Predictive Coding

13.7 Power Spectrum Estimation by the Maximum Entropy

(All Poles) Method

13.8 Spectral Analysis of Unevenly Sampled Data

13.9 Computing Fourier Integrals Using the FFT

13.10 Wavelet Transforms

13.11 Numerical Use of the Sampling Theorem

14 Statistical Description of Data

14.0 Introduction

14.1 Moments of a Distribution: Mean, Variance, Skewness,

and So Forth

14.2 Do Two Distributions Have the Same Means or Variances?

14.3 Are Two Distributions Different?

14.4 Contingency Table Analysis of Two Distributions

14.5 Linear Correlation

14.6 Nonparametric or Rank Correlation

14.7 Do Two-Dimensional Distributions Differ?

14.8 Savitzky-Golay Smoothing Filters

15 Modeling of Data

15.0 Introduction

15.1 Least Squares as a Maximum Likelihood Estimator

15.2 Fitting Data to a Straight Line

15.3 Straight-Line Data with Errors in Both Coordinates

15.4 General Linear Least Squares

15.5 Nonlinear Models

486

493

496

496

500

504

510

521

525

532

537

537

538

545

547

549

558

564

572

575

584

591

606

609

609

610

615

620

628

636

639

645

650

656

656

657

661

666

671

681

15.6 Confidence Limits on Estimated Model Parameters  689
15.7 Robust Estimation  699

16 Integration of Ordinary Differential Equations  707
16.0 Introduction  707
16.1 Runge-Kutta Method  710
16.2 Adaptive Stepsize Control for Runge-Kutta  714
16.3 Modified Midpoint Method  722
16.4 Richardson Extrapolation and the Bulirsch-Stoer Method  724
16.5 Second-Order Conservative Equations  732
16.6 Stiff Sets of Equations  734
16.7 Multistep, Multivalue, and Predictor-Corrector Methods  747

17 Two Point Boundary Value Problems  753
17.0 Introduction  753
17.1 The Shooting Method  757
17.2 Shooting to a Fitting Point  760
17.3 Relaxation Methods  762
17.4 A Worked Example: Spheroidal Harmonics  772
17.5 Automated Allocation of Mesh Points  783
17.6 Handling Internal Boundary Conditions or Singular Points  784

18 Integral Equations and Inverse Theory  788
18.0 Introduction  788
18.1 Fredholm Equations of the Second Kind  791
18.2 Volterra Equations  794
18.3 Integral Equations with Singular Kernels  797
18.4 Inverse Problems and the Use of A Priori Information  804
18.5 Linear Regularization Methods  808
18.6 Backus-Gilbert Method  815
18.7 Maximum Entropy Image Restoration  818

19 Partial Differential Equations  827
19.0 Introduction  827
19.1 Flux-Conservative Initial Value Problems  834
19.2 Diffusive Initial Value Problems  847
19.3 Initial Value Problems in Multidimensions  853
19.4 Fourier and Cyclic Reduction Methods for Boundary Value Problems  857
19.5 Relaxation Methods for Boundary Value Problems  863
19.6 Multigrid Methods for Boundary Value Problems  871

20 Less-Numerical Algorithms  889
20.0 Introduction  889
20.1 Diagnosing Machine Parameters  889
20.2 Gray Codes  894

20.3 Cyclic Redundancy and Other Checksums  896
20.4 Huffman Coding and Compression of Data  903
20.5 Arithmetic Coding  910
20.6 Arithmetic at Arbitrary Precision  915

References  926

Appendix A: Table of Prototype Declarations  930

Appendix B: Utility Routines  940

Appendix C: Complex Arithmetic  948

Index of Programs and Dependencies  951

General Index  965

Preface to the Second Edition

Our aim in writing the original edition of Numerical Recipes was to provide a

book that combined general discussion, analytical mathematics, algorithmics, and

actual working programs. The success of the first edition puts us now in a difficult,

though hardly unenviable, position. We wanted, then and now, to write a book

that is informal, fearlessly editorial, unesoteric, and above all useful. There is a

danger that, if we are not careful, we might produce a second edition that is weighty,

balanced, scholarly, and boring.

It is a mixed blessing that we know more now than we did six years ago. Then,

we were making educated guesses, based on existing literature and our own research,

about which numerical techniques were the most important and robust. Now, we have

the benefit of direct feedback from a large reader community. Letters to our alter-ego

enterprise, Numerical Recipes Software, are in the thousands per year. (Please, don’t

telephone us.) Our post office box has become a magnet for letters pointing out

that we have omitted some particular technique, well known to be important in a

particular field of science or engineering. We value such letters, and digest them

carefully, especially when they point us to specific references in the literature.

The inevitable result of this input is that this Second Edition of Numerical

Recipes is substantially larger than its predecessor, in fact about 50% larger both in

words and number of included programs (the latter now numbering well over 300).

“Don’t let the book grow in size,” is the advice that we received from several wise

colleagues. We have tried to follow the intended spirit of that advice, even as we

violate the letter of it. We have not lengthened, or increased in difficulty, the book’s

principal discussions of mainstream topics. Many new topics are presented at this

same accessible level. Some topics, both from the earlier edition and new to this

one, are now set in smaller type that labels them as being “advanced.” The reader

who ignores such advanced sections completely will not, we think, find any lack of

continuity in the shorter volume that results.

Here are some highlights of the new material in this Second Edition:

• a new chapter on integral equations and inverse methods

• a detailed treatment of multigrid methods for solving elliptic partial

differential equations

• routines for band diagonal linear systems

• improved routines for linear algebra on sparse matrices

• Cholesky and QR decomposition

• orthogonal polynomials and Gaussian quadratures for arbitrary weight

functions

• methods for calculating numerical derivatives

• Padé approximants, and rational Chebyshev approximation

• Bessel functions, and modified Bessel functions, of fractional order; and

several other new special functions

• improved random number routines

• quasi-random sequences

• routines for adaptive and recursive Monte Carlo integration in high-dimensional spaces

• globally convergent methods for sets of nonlinear equations

• simulated annealing minimization for continuous control spaces
• fast Fourier transform (FFT) for real data in two and three dimensions
• fast Fourier transform (FFT) using external storage
• improved fast cosine transform routines
• wavelet transforms
• Fourier integrals with upper and lower limits
• spectral analysis on unevenly sampled data
• Savitzky-Golay smoothing filters
• fitting straight-line data with errors in both coordinates
• a two-dimensional Kolmogorov-Smirnov test
• the statistical bootstrap method
• embedded Runge-Kutta-Fehlberg methods for differential equations
• high-order methods for stiff differential equations
• a new chapter on “less-numerical” algorithms, including Huffman and arithmetic coding, arbitrary precision arithmetic, and several other topics.

Consult the Preface to the First Edition, following, or the Table of Contents, for a

list of the more “basic” subjects treated.

Acknowledgments

It is not possible for us to list by name here all the readers who have made

useful suggestions; we are grateful for these. In the text, we attempt to give specific

attribution for ideas that appear to be original, and not known in the literature. We

apologize in advance for any omissions.

Some readers and colleagues have been particularly generous in providing

us with ideas, comments, suggestions, and programs for this Second Edition.

We especially want to thank George Rybicki, Philip Pinto, Peter Lepage, Robert

Lupton, Douglas Eardley, Ramesh Narayan, David Spergel, Alan Oppenheim, Sallie

Baliunas, Scott Tremaine, Glennys Farrar, Steven Block, John Peacock, Thomas

Loredo, Matthew Choptuik, Gregory Cook, L. Samuel Finn, P. Deuflhard, Harold

Lewis, Peter Weinberger, David Syer, Richard Ferch, Steven Ebstein, Bradley

Keister, and William Gould. We have been helped by Nancy Lee Snyder’s mastery

of a complicated TEX manuscript. We express appreciation to our editors Lauren

Cowles and Alan Harvey at Cambridge University Press, and to our production

editor Russell Hahn. We remain, of course, grateful to the individuals acknowledged

in the Preface to the First Edition.

Special acknowledgment is due to programming consultant Seth Finkelstein,

who wrote, rewrote, or influenced many of the routines in this book, as well as in

its FORTRAN-language twin and the companion Example books. Our project has

benefited enormously from Seth’s talent for detecting, and following the trail of, even

very slight anomalies (often compiler bugs, but occasionally our errors), and from

his good programming sense. To the extent that this edition of Numerical Recipes

in C has a more graceful and “C-like” programming style than its predecessor, most

of the credit goes to Seth. (Of course, we accept the blame for the FORTRANish

lapses that still remain.)

We prepared this book for publication on DEC and Sun workstations running the UNIX operating system, and on a 486/33 PC compatible running

MS-DOS 5.0/Windows 3.0. (See §1.0 for a list of additional computers used in

program tests.) We enthusiastically recommend the principal software used: GNU

Emacs, TEX, Perl, Adobe Illustrator, and PostScript. Also used were a variety of C

compilers – too numerous (and sometimes too buggy) for individual acknowledgment. It is a sobering fact that our standard test suite (exercising all the routines

in this book) has uncovered compiler bugs in many of the compilers tried. When

possible, we work with developers to see that such bugs get fixed; we encourage

interested compiler developers to contact us about such arrangements.

WHP and SAT acknowledge the continued support of the U.S. National Science

Foundation for their research on computational methods. D.A.R.P.A. support is

acknowledged for §13.10 on wavelets.

June, 1992

William H. Press

Saul A. Teukolsky

William T. Vetterling

Brian P. Flannery

Preface to the First Edition

We call this book Numerical Recipes for several reasons. In one sense, this book

is indeed a “cookbook” on numerical computation. However there is an important

distinction between a cookbook and a restaurant menu. The latter presents choices

among complete dishes in each of which the individual flavors are blended and

disguised. The former — and this book — reveals the individual ingredients and

explains how they are prepared and combined.

Another purpose of the title is to connote an eclectic mixture of presentational

techniques. This book is unique, we think, in offering, for each topic considered,

a certain amount of general discussion, a certain amount of analytical mathematics,

a certain amount of discussion of algorithmics, and (most important) actual implementations of these ideas in the form of working computer routines. Our task has

been to find the right balance among these ingredients for each topic. You will

find that for some topics we have tilted quite far to the analytic side; this is where we

have felt there to be gaps in the “standard” mathematical training. For other topics,

where the mathematical prerequisites are universally held, we have tilted towards

more in-depth discussion of the nature of the computational algorithms, or towards

practical questions of implementation.

We admit, therefore, to some unevenness in the “level” of this book. About half

of it is suitable for an advanced undergraduate course on numerical computation for

science or engineering majors. The other half ranges from the level of a graduate

course to that of a professional reference. Most cookbooks have, after all, recipes at

varying levels of complexity. An attractive feature of this approach, we think, is that

the reader can use the book at increasing levels of sophistication as his/her experience

grows. Even inexperienced readers should be able to use our most advanced routines

as black boxes. Having done so, we hope that these readers will subsequently go

back and learn what secrets are inside.

If there is a single dominant theme in this book, it is that practical methods

of numerical computation can be simultaneously efficient, clever, and — important

— clear. The alternative viewpoint, that efficient computational methods must

necessarily be so arcane and complex as to be useful only in “black box” form,

we firmly reject.

Our purpose in this book is thus to open up a large number of computational

black boxes to your scrutiny. We want to teach you to take apart these black boxes

and to put them back together again, modifying them to suit your specific needs.

We assume that you are mathematically literate, i.e., that you have the normal

mathematical preparation associated with an undergraduate degree in a physical

science, or engineering, or economics, or a quantitative social science. We assume

that you know how to program a computer. We do not assume that you have any

prior formal knowledge of numerical analysis or numerical methods.

The scope of Numerical Recipes is supposed to be “everything up to, but

not including, partial differential equations.” We honor this in the breach: First,

we do have one introductory chapter on methods for partial differential equations

(Chapter 19). Second, we obviously cannot include everything else. All the so-called

“standard” topics of a numerical analysis course have been included in this book:

linear equations (Chapter 2), interpolation and extrapolation (Chapter 3), integration

(Chapter 4), nonlinear root-finding (Chapter 9), eigensystems (Chapter 11), and

ordinary differential equations (Chapter 16). Most of these topics have been taken

beyond their standard treatments into some advanced material which we have felt

to be particularly important or useful.

Some other subjects that we cover in detail are not usually found in the standard

numerical analysis texts. These include the evaluation of functions and of particular

special functions of higher mathematics (Chapters 5 and 6); random numbers and

Monte Carlo methods (Chapter 7); sorting (Chapter 8); optimization, including

multidimensional methods (Chapter 10); Fourier transform methods, including FFT

methods and other spectral methods (Chapters 12 and 13); two chapters on the

statistical description and modeling of data (Chapters 14 and 15); and two-point

boundary value problems, both shooting and relaxation methods (Chapter 17).

The programs in this book are included in ANSI-standard C. Versions of the

book in FORTRAN, Pascal, and BASIC are available separately. We have more

to say about the C language, and the computational environment assumed by our

routines, in §1.1 (Introduction).

Acknowledgments

Many colleagues have been generous in giving us the benefit of their numerical

and computational experience, in providing us with programs, in commenting on

the manuscript, or in general encouragement. We particularly wish to thank George

Rybicki, Douglas Eardley, Philip Marcus, Stuart Shapiro, Paul Horowitz, Bruce

Musicus, Irwin Shapiro, Stephen Wolfram, Henry Abarbanel, Larry Smarr, Richard

Muller, John Bahcall, and A.G.W. Cameron.

We also wish to acknowledge two individuals whom we have never met:

Forman Acton, whose 1970 textbook Numerical Methods that Work (New York:

Harper and Row) has surely left its stylistic mark on us; and Donald Knuth, both for

his series of books on The Art of Computer Programming (Reading, MA: Addison-Wesley), and for TEX, the computer typesetting language which immensely aided

production of this book.

Research by the authors on computational methods was supported in part by

the U.S. National Science Foundation.

October, 1985

William H. Press

Brian P. Flannery

Saul A. Teukolsky

William T. Vetterling

License Information

Read this section if you want to use the programs in this book on a computer.

You’ll need to read the following Disclaimer of Warranty, get the programs onto your

computer, and acquire a Numerical Recipes software license. (Without this license,

which can be the free “immediate license” under terms described below, the book is

intended as a text and reference book, for reading purposes only.)

Disclaimer of Warranty

We make no warranties, express or implied, that the programs contained

in this volume are free of error, or are consistent with any particular standard

of merchantability, or that they will meet your requirements for any particular

application. They should not be relied on for solving a problem whose incorrect

solution could result in injury to a person or loss of property. If you do use the

programs in such a manner, it is at your own risk. The authors and publisher

disclaim all liability for direct or consequential damages resulting from your

use of the programs.

How to Get the Code onto Your Computer

Pick one of the following methods:

• You can type the programs from this book directly into your computer. In this

case, the only kind of license available to you is the free “immediate license”

(see below). You are not authorized to transfer or distribute a machine-readable

copy to any other person, nor to have any other person type the programs into a

computer on your behalf. We do not want to hear bug reports from you if you

choose this option, because experience has shown that virtually all reported bugs

in such cases are typing errors!

• You can download the Numerical Recipes programs electronically from the

Numerical Recipes On-Line Software Store, located at http://www.nr.com, our

Web site. They are packaged as a password-protected file, and you’ll need to

purchase a license to unpack them. You can get a single-screen license and

password immediately, on-line, from the On-Line Store, with fees ranging from

$50 (PC, Macintosh, educational institutions’ UNIX) to $140 (general UNIX).

Downloading the packaged software from the On-Line Store is also the way to

start if you want to acquire a more general (multiscreen, site, or corporate) license.

• You can purchase media containing the programs from Cambridge University

Press. Diskette versions are available in IBM-compatible format for machines

running Windows 3.1, 95, or NT. CDROM versions in ISO-9660 format for PC,

Macintosh, and UNIX systems are also available; these include both C and Fortran

versions on a single CDROM (as well as versions in Pascal and BASIC from the

first edition). Diskettes purchased from Cambridge University Press include a

single-screen license for PC or Macintosh only. The CDROM is available with

a single-screen license for PC or Macintosh (order ISBN 0 521 576083), or (at a

slightly higher price) with a single-screen license for UNIX workstations (order

ISBN 0 521 576075). Orders for media from Cambridge University Press can

be placed at 800 872-7423 (North America only) or by email to orders@cup.org

(North America) or trade@cup.cam.ac.uk (rest of world). Or, visit the Web sites

http://www.cup.org (North America) or http://www.cup.cam.ac.uk (rest

of world).

Types of License Offered

Here are the types of licenses that we offer. Note that some types are

automatically acquired with the purchase of media from Cambridge University

Press, or of an unlocking password from the Numerical Recipes On-Line Software

Store, while other types of licenses require that you communicate specifically with

Numerical Recipes Software (email: orders@nr.com or fax: 781 863-1739). Our

Web site http://www.nr.com has additional information.

• [“Immediate License”] If you are the individual owner of a copy of this book and

you type one or more of its routines into your computer, we authorize you to use

them on that computer for your own personal and noncommercial purposes. You

are not authorized to transfer or distribute machine-readable copies to any other

person, or to use the routines on more than one machine, or to distribute executable

programs containing our routines. This is the only free license.

• [“Single-Screen License”] This is the most common type of low-cost license, with

terms governed by our Single Screen (Shrinkwrap) License document (complete

terms available through our Web site). Basically, this license lets you use Numerical

Recipes routines on any one screen (PC, workstation, X-terminal, etc.). You may

also, under this license, transfer pre-compiled, executable programs incorporating

our routines to other, unlicensed, screens or computers, providing that (i) your

application is noncommercial (i.e., does not involve the selling of your program

for a fee), (ii) the programs were first developed, compiled, and successfully run

on a licensed screen, and (iii) our routines are bound into the programs in such a

manner that they cannot be accessed as individual routines and cannot practicably

be unbound and used in other programs. That is, under this license, your program

user must not be able to use our programs as part of a program library or “mix-and-match” workbench. Conditions for other types of commercial or noncommercial

distribution may be found on our Web site (http://www.nr.com).

• [“Multi-Screen, Server, Site, and Corporate Licenses”] The terms of the Single

Screen License can be extended to designated groups of machines, defined by

number of screens, number of machines, locations, or ownership. Significant

discounts from the corresponding single-screen prices are available when the

estimated number of screens exceeds 40. Contact Numerical Recipes Software

(email: orders@nr.com or fax: 781 863-1739) for details.

• [“Course Right-to-Copy License”] Instructors at accredited educational institutions

who have adopted this book for a course, and who have already purchased a Single

Screen License (either acquired with the purchase of media, or from the Numerical

Recipes On-Line Software Store), may license the programs for use in that course

as follows: Mail your name, title, and address; the course name, number, dates,

and estimated enrollment; and advance payment of $5 per (estimated) student

to Numerical Recipes Software, at this address: P.O. Box 243, Cambridge, MA

02238 (USA). You will receive by return mail a license authorizing you to make

copies of the programs for use by your students, and/or to transfer the programs to

a machine accessible to your students (but only for the duration of the course).

About Copyrights on Computer Programs

Like artistic or literary compositions, computer programs are protected by

copyright. Generally it is an infringement for you to copy into your computer a

program from a copyrighted source. (It is also not a friendly thing to do, since it

deprives the program’s author of compensation for his or her creative effort.) Under

copyright law, all “derivative works” (modified versions, or translations into another

computer language) also come under the same copyright as the original work.

Copyright does not protect ideas, but only the expression of those ideas in

a particular form. In the case of a computer program, the ideas consist of the

program’s methodology and algorithm, including the necessary sequence of steps

adopted by the programmer. The expression of those ideas is the program source

code (particularly any arbitrary or stylistic choices embodied in it), its derived object

code, and any other derivative works.

If you analyze the ideas contained in a program, and then express those

ideas in your own completely different implementation, then that new program

implementation belongs to you. That is what we have done for those programs in

this book that are not entirely of our own devising. When programs in this book are

said to be “based” on programs published in copyright sources, we mean that the

ideas are the same. The expression of these ideas as source code is our own. We

believe that no material in this book infringes on an existing copyright.

Trademarks

Several registered trademarks appear within the text of this book: Sun is a

trademark of Sun Microsystems, Inc. SPARC and SPARCstation are trademarks of

SPARC International, Inc. Microsoft, Windows 95, Windows NT, PowerStation,

and MS are trademarks of Microsoft Corporation. DEC, VMS, Alpha AXP, and

ULTRIX are trademarks of Digital Equipment Corporation. IBM is a trademark of

International Business Machines Corporation. Apple and Macintosh are trademarks

of Apple Computer, Inc. UNIX is a trademark licensed exclusively through X/Open

Co. Ltd. IMSL is a trademark of Visual Numerics, Inc. NAG refers to proprietary

computer software of Numerical Algorithms Group (USA) Inc. PostScript and

Adobe Illustrator are trademarks of Adobe Systems Incorporated. Last, and no doubt

least, Numerical Recipes (when identifying products) is a trademark of Numerical

Recipes Software.

Attributions

The fact that ideas are legally “free as air” in no way supersedes the ethical

requirement that ideas be credited to their known originators. When programs in

this book are based on known sources, whether copyrighted or in the public domain,

published or “handed-down,” we have attempted to give proper attribution. Unfortunately, the lineage of many programs in common circulation is often unclear. We

would be grateful to readers for new or corrected information regarding attributions,

which we will attempt to incorporate in subsequent printings.

Computer Programs

by Chapter and Section

1.0  flmoon   calculate phases of the moon by date
1.1  julday   Julian Day number from calendar date
1.1  badluk   Friday the 13th when the moon is full
1.1  caldat   calendar date from Julian day number

2.1   gaussj   Gauss-Jordan matrix inversion and linear equation solution
2.3   ludcmp   linear equation solution, LU decomposition
2.3   lubksb   linear equation solution, backsubstitution
2.4   tridag   solution of tridiagonal systems
2.4   banmul   multiply vector by band diagonal matrix
2.4   bandec   band diagonal systems, decomposition
2.4   banbks   band diagonal systems, backsubstitution
2.5   mprove   linear equation solution, iterative improvement
2.6   svbksb   singular value backsubstitution
2.6   svdcmp   singular value decomposition of a matrix
2.6   pythag   calculate (a^2 + b^2)^(1/2) without overflow
2.7   cyclic   solution of cyclic tridiagonal systems
2.7   sprsin   convert matrix to sparse format
2.7   sprsax   product of sparse matrix and vector
2.7   sprstx   product of transpose sparse matrix and vector
2.7   sprstp   transpose of sparse matrix
2.7   sprspm   pattern multiply two sparse matrices
2.7   sprstm   threshold multiply two sparse matrices
2.7   linbcg   biconjugate gradient solution of sparse systems
2.7   snrm     used by linbcg for vector norm
2.7   atimes   used by linbcg for sparse multiplication
2.7   asolve   used by linbcg for preconditioner
2.8   vander   solve Vandermonde systems
2.8   toeplz   solve Toeplitz systems
2.9   choldc   Cholesky decomposition
2.9   cholsl   Cholesky backsubstitution
2.10  qrdcmp   QR decomposition
2.10  qrsolv   QR backsubstitution
2.10  rsolv    right triangular backsubstitution
2.10  qrupdt   update a QR decomposition
2.10  rotate   Jacobi rotation used by qrupdt

3.1  polint   polynomial interpolation
3.2  ratint   rational function interpolation
3.3  spline   construct a cubic spline
3.3  splint   cubic spline interpolation
3.4  locate   search an ordered table by bisection

3.4  hunt     search a table when calls are correlated
3.5  polcoe   polynomial coefficients from table of values
3.5  polcof   polynomial coefficients from table of values
3.6  polin2   two-dimensional polynomial interpolation
3.6  bcucof   construct two-dimensional bicubic
3.6  bcuint   two-dimensional bicubic interpolation
3.6  splie2   construct two-dimensional spline
3.6  splin2   two-dimensional spline interpolation

4.2  trapzd   trapezoidal rule
4.2  qtrap    integrate using trapezoidal rule
4.2  qsimp    integrate using Simpson’s rule
4.3  qromb    integrate using Romberg adaptive method
4.4  midpnt   extended midpoint rule
4.4  qromo    integrate using open Romberg adaptive method
4.4  midinf   integrate a function on a semi-infinite interval
4.4  midsql   integrate a function with lower square-root singularity
4.4  midsqu   integrate a function with upper square-root singularity
4.4  midexp   integrate a function that decreases exponentially
4.5  qgaus    integrate a function by Gaussian quadratures
4.5  gauleg   Gauss-Legendre weights and abscissas
4.5  gaulag   Gauss-Laguerre weights and abscissas
4.5  gauher   Gauss-Hermite weights and abscissas
4.5  gaujac   Gauss-Jacobi weights and abscissas
4.5  gaucof   quadrature weights from orthogonal polynomials
4.5  orthog   construct nonclassical orthogonal polynomials
4.6  quad3d   integrate a function over a three-dimensional space

5.1   eulsum   sum a series by Euler–van Wijngaarden algorithm
5.3   ddpoly   evaluate a polynomial and its derivatives
5.3   poldiv   divide one polynomial by another
5.3   ratval   evaluate a rational function
5.7   dfridr   numerical derivative by Ridders’ method
5.8   chebft   fit a Chebyshev polynomial to a function
5.8   chebev   Chebyshev polynomial evaluation
5.9   chder    derivative of a function already Chebyshev fitted
5.9   chint    integrate a function already Chebyshev fitted
5.10  chebpc   polynomial coefficients from a Chebyshev fit
5.10  pcshft   polynomial coefficients of a shifted polynomial
5.11  pccheb   inverse of chebpc; use to economize power series
5.12  pade     Padé approximant from power series coefficients
5.13  ratlsq   rational fit by least-squares method

6.1  gammln   logarithm of gamma function
6.1  factrl   factorial function
6.1  bico     binomial coefficients function
6.1  factln   logarithm of factorial function

6.1   beta     beta function
6.2   gammp    incomplete gamma function
6.2   gammq    complement of incomplete gamma function
6.2   gser     series used by gammp and gammq
6.2   gcf      continued fraction used by gammp and gammq
6.2   erff     error function
6.2   erffc    complementary error function
6.2   erfcc    complementary error function, concise routine
6.3   expint   exponential integral En
6.3   ei       exponential integral Ei
6.4   betai    incomplete beta function
6.4   betacf   continued fraction used by betai
6.5   bessj0   Bessel function J0
6.5   bessy0   Bessel function Y0
6.5   bessj1   Bessel function J1
6.5   bessy1   Bessel function Y1
6.5   bessy    Bessel function Y of general integer order
6.5   bessj    Bessel function J of general integer order
6.6   bessi0   modified Bessel function I0
6.6   bessk0   modified Bessel function K0
6.6   bessi1   modified Bessel function I1
6.6   bessk1   modified Bessel function K1
6.6   bessk    modified Bessel function K of integer order
6.6   bessi    modified Bessel function I of integer order
6.7   bessjy   Bessel functions of fractional order
6.7   beschb   Chebyshev expansion used by bessjy
6.7   bessik   modified Bessel functions of fractional order
6.7   airy     Airy functions
6.7   sphbes   spherical Bessel functions jn and yn
6.8   plgndr   Legendre polynomials, associated (spherical harmonics)
6.9   frenel   Fresnel integrals S(x) and C(x)
6.9   cisi     cosine and sine integrals Ci and Si
6.10  dawson   Dawson’s integral
6.11  rf       Carlson’s elliptic integral of the first kind
6.11  rd       Carlson’s elliptic integral of the second kind
6.11  rj       Carlson’s elliptic integral of the third kind
6.11  rc       Carlson’s degenerate elliptic integral
6.11  ellf     Legendre elliptic integral of the first kind
6.11  elle     Legendre elliptic integral of the second kind
6.11  ellpi    Legendre elliptic integral of the third kind
6.11  sncndn   Jacobian elliptic functions
6.12  hypgeo   complex hypergeometric function
6.12  hypser   complex hypergeometric function, series evaluation
6.12  hypdrv   complex hypergeometric function, derivative of

7.1  ran0     random deviate by Park and Miller minimal standard
7.1  ran1     random deviate, minimal standard plus shuffle

7.1  ran2     random deviate by L’Ecuyer long period plus shuffle
7.1  ran3     random deviate by Knuth subtractive method
7.2  expdev   exponential random deviates
7.2  gasdev   normally distributed random deviates
7.3  gamdev   gamma-law distribution random deviates
7.3  poidev   Poisson distributed random deviates
7.3  bnldev   binomial distributed random deviates
7.4  irbit1   random bit sequence
7.4  irbit2   random bit sequence
7.5  psdes    “pseudo-DES” hashing of 64 bits
7.5  ran4     random deviates from DES-like hashing
7.7  sobseq   Sobol’s quasi-random sequence
7.8  vegas    adaptive multidimensional Monte Carlo integration
7.8  rebin    sample rebinning used by vegas
7.8  miser    recursive multidimensional Monte Carlo integration
7.8  ranpt    get random point, used by miser

8.1     piksrt    sort an array by straight insertion
8.1     piksr2    sort two arrays by straight insertion
8.1     shell     sort an array by Shell's method
8.2     sort      sort an array by quicksort method
8.2     sort2     sort two arrays by quicksort method
8.3     hpsort    sort an array by heapsort method
8.4     indexx    construct an index for an array
8.4     sort3     sort, use an index to sort 3 or more arrays
8.4     rank      construct a rank table for an array
8.5     select    find the Nth largest in an array
8.5     selip     find the Nth largest, without altering an array
8.5     hpsel     find M largest values, without altering an array
8.6     eclass    determine equivalence classes from list
8.6     eclazz    determine equivalence classes from procedure

9.0     scrsho    graph a function to search for roots
9.1     zbrac     outward search for brackets on roots
9.1     zbrak     inward search for brackets on roots
9.1     rtbis     find root of a function by bisection
9.2     rtflsp    find root of a function by false-position
9.2     rtsec     find root of a function by secant method
9.2     zriddr    find root of a function by Ridders' method
9.3     zbrent    find root of a function by Brent's method
9.4     rtnewt    find root of a function by Newton-Raphson
9.4     rtsafe    find root of a function by Newton-Raphson and bisection
9.5     laguer    find a root of a polynomial by Laguerre's method
9.5     zroots    roots of a polynomial by Laguerre's method with deflation
9.5     zrhqr     roots of a polynomial by eigenvalue methods
9.5     qroot     complex or double root of a polynomial, Bairstow


9.6     mnewt     Newton's method for systems of equations
9.7     lnsrch    search along a line, used by newt
9.7     newt      globally convergent multi-dimensional Newton's method
9.7     fdjac     finite-difference Jacobian, used by newt
9.7     fmin      norm of a vector function, used by newt
9.7     broydn    secant method for systems of equations

10.1    mnbrak    bracket the minimum of a function
10.1    golden    find minimum of a function by golden section search
10.2    brent     find minimum of a function by Brent's method
10.3    dbrent    find minimum of a function using derivative information
10.4    amoeba    minimize in N-dimensions by downhill simplex method
10.4    amotry    evaluate a trial point, used by amoeba
10.5    powell    minimize in N-dimensions by Powell's method
10.5    linmin    minimum of a function along a ray in N-dimensions
10.5    f1dim     function used by linmin
10.6    frprmn    minimize in N-dimensions by conjugate gradient
10.6    dlinmin   minimum of a function along a ray using derivatives
10.6    df1dim    function used by dlinmin
10.7    dfpmin    minimize in N-dimensions by variable metric method
10.8    simplx    linear programming maximization of a linear function
10.8    simp1     linear programming, used by simplx
10.8    simp2     linear programming, used by simplx
10.8    simp3     linear programming, used by simplx
10.9    anneal    traveling salesman problem by simulated annealing
10.9    revcst    cost of a reversal, used by anneal
10.9    reverse   do a reversal, used by anneal
10.9    trncst    cost of a transposition, used by anneal
10.9    trnspt    do a transposition, used by anneal
10.9    metrop    Metropolis algorithm, used by anneal
10.9    amebsa    simulated annealing in continuous spaces
10.9    amotsa    evaluate a trial point, used by amebsa

11.1    jacobi    eigenvalues and eigenvectors of a symmetric matrix
11.1    eigsrt    eigenvectors, sorts into order by eigenvalue
11.2    tred2     Householder reduction of a real, symmetric matrix
11.3    tqli      eigensolution of a symmetric tridiagonal matrix
11.5    balanc    balance a nonsymmetric matrix
11.5    elmhes    reduce a general matrix to Hessenberg form
11.6    hqr       eigenvalues of a Hessenberg matrix

12.2    four1     fast Fourier transform (FFT) in one dimension
12.3    twofft    fast Fourier transform of two real functions
12.3    realft    fast Fourier transform of a single real function
12.3    sinft     fast sine transform
12.3    cosft1    fast cosine transform with endpoints
12.3    cosft2    "staggered" fast cosine transform


12.4    fourn     fast Fourier transform in multidimensions
12.5    rlft3     FFT of real data in two or three dimensions
12.6    fourfs    FFT for huge data sets on external media
12.6    fourew    rewind and permute files, used by fourfs

13.1    convlv    convolution or deconvolution of data using FFT
13.2    correl    correlation or autocorrelation of data using FFT
13.4    spctrm    power spectrum estimation using FFT
13.6    memcof    evaluate maximum entropy (MEM) coefficients
13.6    fixrts    reflect roots of a polynomial into unit circle
13.6    predic    linear prediction using MEM coefficients
13.7    evlmem    power spectral estimation from MEM coefficients
13.8    period    power spectrum of unevenly sampled data
13.8    fasper    power spectrum of unevenly sampled larger data sets
13.8    spread    extirpolate value into array, used by fasper
13.9    dftcor    compute endpoint corrections for Fourier integrals
13.9    dftint    high-accuracy Fourier integrals
13.10   wt1       one-dimensional discrete wavelet transform
13.10   daub4     Daubechies 4-coefficient wavelet filter
13.10   pwtset    initialize coefficients for pwt
13.10   pwt       partial wavelet transform
13.10   wtn       multidimensional discrete wavelet transform

14.1    moment    calculate moments of a data set
14.2    ttest     Student's t-test for difference of means
14.2    avevar    calculate mean and variance of a data set
14.2    tutest    Student's t-test for means, case of unequal variances
14.2    tptest    Student's t-test for means, case of paired data
14.2    ftest     F-test for difference of variances
14.3    chsone    chi-square test for difference between data and model
14.3    chstwo    chi-square test for difference between two data sets
14.3    ksone     Kolmogorov-Smirnov test of data against model
14.3    kstwo     Kolmogorov-Smirnov test between two data sets
14.3    probks    Kolmogorov-Smirnov probability function
14.4    cntab1    contingency table analysis using chi-square
14.4    cntab2    contingency table analysis using entropy measure
14.5    pearsn    Pearson's correlation between two data sets
14.6    spear     Spearman's rank correlation between two data sets
14.6    crank     replaces array elements by their rank
14.6    kendl1    correlation between two data sets, Kendall's tau
14.6    kendl2    contingency table analysis using Kendall's tau
14.7    ks2d1s    K-S test in two dimensions, data vs. model
14.7    quadct    count points by quadrants, used by ks2d1s
14.7    quadvl    quadrant probabilities, used by ks2d1s
14.7    ks2d2s    K-S test in two dimensions, data vs. data
14.8    savgol    Savitzky-Golay smoothing coefficients


15.2    fit       least-squares fit data to a straight line
15.3    fitexy    fit data to a straight line, errors in both x and y
15.3    chixy     used by fitexy to calculate a chi-square
15.4    lfit      general linear least-squares fit by normal equations
15.4    covsrt    rearrange covariance matrix, used by lfit
15.4    svdfit    linear least-squares fit by singular value decomposition
15.4    svdvar    variances from singular value decomposition
15.4    fpoly     fit a polynomial using lfit or svdfit
15.4    fleg      fit a Legendre polynomial using lfit or svdfit
15.5    mrqmin    nonlinear least-squares fit, Marquardt's method
15.5    mrqcof    used by mrqmin to evaluate coefficients
15.5    fgauss    fit a sum of Gaussians using mrqmin
15.7    medfit    fit data to a straight line robustly, least absolute deviation
15.7    rofunc    fit data robustly, used by medfit

16.1    rk4       integrate one step of ODEs, fourth-order Runge-Kutta
16.1    rkdumb    integrate ODEs by fourth-order Runge-Kutta
16.2    rkqs      integrate one step of ODEs with accuracy monitoring
16.2    rkck      Cash-Karp-Runge-Kutta step used by rkqs
16.2    odeint    integrate ODEs with accuracy monitoring
16.3    mmid      integrate ODEs by modified midpoint method
16.4    bsstep    integrate ODEs, Bulirsch-Stoer step
16.4    pzextr    polynomial extrapolation, used by bsstep
16.4    rzextr    rational function extrapolation, used by bsstep
16.5    stoerm    integrate conservative second-order ODEs
16.6    stiff     integrate stiff ODEs by fourth-order Rosenbrock
16.6    jacobn    sample Jacobian routine for stiff
16.6    derivs    sample derivatives routine for stiff
16.6    simpr     integrate stiff ODEs by semi-implicit midpoint rule
16.6    stifbs    integrate stiff ODEs, Bulirsch-Stoer step

17.1    shoot     solve two point boundary value problem by shooting
17.2    shootf    ditto, by shooting to a fitting point
17.3    solvde    two point boundary value problem, solve by relaxation
17.3    bksub     backsubstitution, used by solvde
17.3    pinvs     diagonalize a sub-block, used by solvde
17.3    red       reduce columns of a matrix, used by solvde
17.4    sfroid    spheroidal functions by method of solvde
17.4    difeq     spheroidal matrix coefficients, used by sfroid
17.4    sphoot    spheroidal functions by method of shoot
17.4    sphfpt    spheroidal functions by method of shootf

18.1    fred2     solve linear Fredholm equations of the second kind
18.1    fredin    interpolate solutions obtained with fred2
18.2    voltra    linear Volterra equations of the second kind
18.3    wwghts    quadrature weights for an arbitrarily singular kernel
18.3    kermom    sample routine for moments of a singular kernel


18.3    quadmx    sample routine for a quadrature matrix
18.3    fredex    example of solving a singular Fredholm equation

19.5    sor       elliptic PDE solved by successive overrelaxation method
19.6    mglin     linear elliptic PDE solved by multigrid method
19.6    rstrct    half-weighting restriction, used by mglin, mgfas
19.6    interp    bilinear prolongation, used by mglin, mgfas
19.6    addint    interpolate and add, used by mglin
19.6    slvsml    solve on coarsest grid, used by mglin
19.6    relax     Gauss-Seidel relaxation, used by mglin
19.6    resid     calculate residual, used by mglin
19.6    copy      utility used by mglin, mgfas
19.6    fill0     utility used by mglin
19.6    mgfas     nonlinear elliptic PDE solved by multigrid method
19.6    relax2    Gauss-Seidel relaxation, used by mgfas
19.6    slvsm2    solve on coarsest grid, used by mgfas
19.6    lop       applies nonlinear operator, used by mgfas
19.6    matadd    utility used by mgfas
19.6    matsub    utility used by mgfas
19.6    anorm2    utility used by mgfas

20.1    machar    diagnose computer's floating arithmetic
20.2    igray     Gray code and its inverse
20.3    icrc1     cyclic redundancy checksum, used by icrc
20.3    icrc      cyclic redundancy checksum
20.3    decchk    decimal check digit calculation or verification
20.4    hufmak    construct a Huffman code
20.4    hufapp    append bits to a Huffman code, used by hufmak
20.4    hufenc    use Huffman code to encode and compress a character
20.4    hufdec    use Huffman code to decode and decompress a character
20.5    arcmak    construct an arithmetic code
20.5    arcode    encode or decode a character using arithmetic coding
20.5    arcsum    add integer to byte string, used by arcode
20.6    mpops     multiple precision arithmetic, simpler operations
20.6    mpmul     multiple precision multiply, using FFT methods
20.6    mpinv     multiple precision reciprocal
20.6    mpdiv     multiple precision divide and remainder
20.6    mpsqrt    multiple precision square root
20.6    mp2dfr    multiple precision conversion to decimal base
20.6    mppi      multiple precision example, compute many digits of pi

Chapter 1.

Preliminaries

1.0 Introduction

This book, like its predecessor edition, is supposed to teach you methods of

numerical computing that are practical, efficient, and (insofar as possible) elegant.

We presume throughout this book that you, the reader, have particular tasks that you

want to get done. We view our job as educating you on how to proceed. Occasionally

we may try to reroute you briefly onto a particularly beautiful side road; but by and

large, we will guide you along main highways that lead to practical destinations.

Throughout this book, you will find us fearlessly editorializing, telling you

what you should and shouldn’t do. This prescriptive tone results from a conscious

decision on our part, and we hope that you will not find it irritating. We do not

claim that our advice is infallible! Rather, we are reacting against a tendency, in

the textbook literature of computation, to discuss every possible method that has

ever been invented, without ever offering a practical judgment on relative merit. We

do, therefore, offer you our practical judgments whenever we can. As you gain

experience, you will form your own opinion of how reliable our advice is.

We presume that you are able to read computer programs in C, that being

the language of this version of Numerical Recipes (Second Edition). The book

Numerical Recipes in FORTRAN (Second Edition) is separately available, if you

prefer to program in that language. Earlier editions of Numerical Recipes in Pascal

and Numerical Recipes Routines and Examples in BASIC are also available; while

not containing the additional material of the Second Edition versions in C and

FORTRAN, these versions are perfectly serviceable if Pascal or BASIC is your

language of choice.

When we include programs in the text, they look like this:

#include <math.h>
#define RAD (3.14159265/180.0)

void flmoon(int n, int nph, long *jd, float *frac)
/* Our programs begin with an introductory comment summarizing their purpose and
explaining their calling sequence. This routine calculates the phases of the moon. Given
an integer n and a code nph for the phase desired (nph = 0 for new moon, 1 for first
quarter, 2 for full, 3 for last quarter), the routine returns the Julian Day Number jd,
and the fractional part of a day frac to be added to it, of the nth such phase since
January, 1900. Greenwich Mean Time is assumed. */
{
    void nrerror(char error_text[]);
    int i;
    float am,as,c,t,t2,xtra;

    c=n+nph/4.0;                          /* This is how we comment an individual line. */
    t=c/1236.85;
    t2=t*t;
    as=359.2242+29.105356*c;              /* You aren't really intended to understand */
    am=306.0253+385.816918*c+0.010730*t2; /* this algorithm, but it does work! */
    *jd=2415020+28L*n+7L*nph;
    xtra=0.75933+1.53058868*c+((1.178e-4)-(1.55e-7)*t)*t2;
    if (nph == 0 || nph == 2)
        xtra += (0.1734-3.93e-4*t)*sin(RAD*as)-0.4068*sin(RAD*am);
    else if (nph == 1 || nph == 3)
        xtra += (0.1721-4.0e-4*t)*sin(RAD*as)-0.6280*sin(RAD*am);
    else nrerror("nph is unknown in flmoon");  /* This is how we will indicate error conditions. */
    i=(int)(xtra >= 0.0 ? floor(xtra) : ceil(xtra-1.0));
    *jd += i;
    *frac=xtra-i;
}

If the syntax of the function definition above looks strange to you, then you are

probably used to the older Kernighan and Ritchie (“K&R”) syntax, rather than that of

the newer ANSI C. In this edition, we adopt ANSI C as our standard. You might want

to look ahead to §1.2 where ANSI C function prototypes are discussed in more detail.
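For readers who have seen only the older syntax, the contrast is easy to show with a toy function (the function degtorad is ours, for illustration; it is not a book routine). Only the ANSI form appears in this edition:

```c
#include <assert.h>

/* ANSI C style: parameter types appear inside the parentheses, and a
   matching prototype may be declared before the function is used. */
float degtorad(float deg);           /* the ANSI prototype */

float degtorad(float deg)            /* the ANSI definition */
{
    return deg*(3.14159265f/180.0f);
}

/* The same definition in the older K&R syntax would read:
 *
 *     float degtorad(deg)
 *     float deg;
 *     {
 *         return deg*(3.14159265/180.0);
 *     }
 */
```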

Note our convention of handling all errors and exceptional cases with a statement

like nrerror("some error message");. The function nrerror() is part of a

small file of utility programs, nrutil.c, listed in Appendix B at the back of the

book. This Appendix includes a number of other utilities that we will describe later in

this chapter. Function nrerror() prints the indicated error message to your stderr

device (usually your terminal screen), and then invokes the function exit(), which

terminates execution. The function exit() is in every C library we know of; but if

you find it missing, you can modify nrerror() so that it does anything else that will

halt execution. For example, you can have it pause for input from the keyboard, and

then manually interrupt execution. In some applications, you will want to modify

nrerror() to do more sophisticated error handling, for example to transfer control

somewhere else, with an error flag or error code set.
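A version of nrerror() consistent with this description, together with one possible "more sophisticated" variant that sets a flag instead of terminating, might look as follows. This is a sketch; the version actually distributed is the one listed in nrutil.c in Appendix B, and the name nrerror_soft and the flag are our own illustration:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical error flag for the non-terminating variant; not part of nrutil.c. */
static int nr_error_flag = 0;

/* Default behavior as described in the text: print the message to stderr,
   then halt execution via exit(). */
void nrerror(char error_text[])
{
    fprintf(stderr,"Numerical Recipes run-time error...\n");
    fprintf(stderr,"%s\n",error_text);
    exit(1);
}

/* A modified handler: record the error and return, letting the caller
   transfer control with the error flag set. */
void nrerror_soft(char error_text[])
{
    fprintf(stderr,"%s\n",error_text);
    nr_error_flag = 1;
}
```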

We will have more to say about the C programming language, its conventions

and style, in §1.1 and §1.2.

Computational Environment and Program Validation

Our goal is that the programs in this book be as portable as possible, across

different platforms (models of computer), across different operating systems, and

across different C compilers. C was designed with this type of portability in

mind. Nevertheless, we have found that there is no substitute for actually checking

all programs on a variety of compilers, in the process uncovering differences in

library structure or contents, and even occasional differences in allowed syntax. As

surrogates for the large number of possible combinations, we have tested all the

programs in this book on the combinations of machines, operating systems, and

compilers shown on the accompanying table. More generally, the programs should

run without modification on any compiler that implements the ANSI C standard,

as described for example in Harbison and Steele’s excellent book [1]. With small

modifications, our programs should run on any compiler that implements the older,

de facto K&R standard [2]. An example of the kind of trivial incompatibility to

watch out for is that ANSI C requires the memory allocation functions malloc()

and free() to be declared via the header stdlib.h; some older compilers require
them to be declared with the header file malloc.h, while others regard them as
inherent in the language and require no header file at all.

Tested Machines and Compilers

Hardware                   O/S Version               Compiler Version
IBM PC compatible 486/33   MS-DOS 5.0/Windows 3.1    Microsoft C/C++ 7.0
IBM PC compatible 486/33   MS-DOS 5.0                Borland C/C++ 2.0
IBM RS/6000                AIX 3.2                   IBM xlc 1.02
DECstation 5000/25         ULTRIX 4.2A               CodeCenter (Saber) C 3.1.1
DECsystem 5400             ULTRIX 4.1                GNU C Compiler 2.1
Sun SPARCstation 2         SunOS 4.1                 GNU C Compiler 1.40
DECstation 5000/200        ULTRIX 4.2                DEC RISC C 2.1*
Sun SPARCstation 2         SunOS 4.1                 Sun cc 1.1*

*compiler version does not fully implement ANSI C; only K&R validated
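For the malloc()/free() point just made, the ANSI-conforming idiom can be sketched as below. The helper make_vector is our own illustrative name; the book's allocators, with different conventions, live in nrutil.c:

```c
#include <stdio.h>
#include <stdlib.h>    /* ANSI C: declares malloc() and free() here */

/* Allocate a vector of n floats, printing a complaint on failure.
   An illustrative sketch of the ANSI-conforming usage only. */
float *make_vector(long n)
{
    float *v=(float *)malloc(n*sizeof(float));
    if (v == NULL) fprintf(stderr,"allocation failure in make_vector\n");
    return v;
}
```

Under a pre-ANSI compiler, the same code would instead need #include <malloc.h>, or no header at all.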

In validating the programs, we have taken the program source code directly

from the machine-readable form of the book’s manuscript, to decrease the chance

of propagating typographical errors. “Driver” or demonstration programs that we

used as part of our validations are available separately as the Numerical Recipes

Example Book (C), as well as in machine-readable form. If you plan to use more

than a few of the programs in this book, or if you plan to use programs in this book

on more than one different computer, then you may find it useful to obtain a copy

of these demonstration programs.

Of course we would be foolish to claim that there are no bugs in our programs,

and we do not make such a claim. We have been very careful, and have benefitted

from the experience of the many readers who have written to us. If you find a new

bug, please document it and tell us!

Compatibility with the First Edition

If you are accustomed to the Numerical Recipes routines of the First Edition, rest

assured: almost all of them are still here, with the same names and functionalities,

often with major improvements in the code itself. In addition, we hope that you

will soon become equally familiar with the added capabilities of the more than 100

routines that are new to this edition.

We have retired a small number of First Edition routines, those that we believe

to be clearly dominated by better methods implemented in this edition. A table,

following, lists the retired routines and suggests replacements.

First Edition users should also be aware that some routines common to

both editions have alterations in their calling interfaces, so are not directly “plug

compatible.” A fairly complete list is: chsone, chstwo, covsrt, dfpmin, laguer,

lfit, memcof, mrqcof, mrqmin, pzextr, ran4, realft, rzextr, shoot, shootf.

There may be others (depending in part on which printing of the First Edition is taken

for the comparison). If you have written software of any appreciable complexity

that is dependent on First Edition routines, we do not recommend blindly replacing
them by the corresponding routines in this book. We do recommend that any new
programming efforts use the new routines.

Previous Routines Omitted from This Edition

Name(s)          Replacement(s)        Comment
adi              mglin or mgfas        better method
cosft            cosft1 or cosft2      choice of boundary conditions
cel, el2         rf, rd, rj, rc        better algorithms
des, desks       ran4 now uses psdes   was too slow
mdian1, mdian2   select, selip         more general
qcksrt           sort                  name change (sort is now hpsort)
rkqc             rkqs                  better method
smooft           use convlv with coefficients from savgol
sparse           linbcg                more general

About References

You will find references, and suggestions for further reading, listed at the

end of most sections of this book. References are cited in the text by bracketed

numbers like this [3].

Because computer algorithms often circulate informally for quite some time

before appearing in a published form, the task of uncovering “primary literature”

is sometimes quite difficult. We have not attempted this, and we do not pretend

to any degree of bibliographical completeness in this book. For topics where a

substantial secondary literature exists (discussion in textbooks, reviews, etc.) we

have consciously limited our references to a few of the more useful secondary

sources, especially those with good references to the primary literature. Where the

existing secondary literature is insufficient, we give references to a few primary

sources that are intended to serve as starting points for further reading, not as

complete bibliographies for the field.

The order in which references are listed is not necessarily significant. It reflects a

compromise between listing cited references in the order cited, and listing suggestions

for further reading in a roughly prioritized order, with the most useful ones first.

The remaining three sections of this chapter review some basic concepts of

programming (control structures, etc.), discuss a set of conventions specific to C

that we have adopted in this book, and introduce some fundamental concepts in

numerical analysis (roundoff error, etc.). Thereafter, we plunge into the substantive

material of the book.

CITED REFERENCES AND FURTHER READING:

Harbison, S.P., and Steele, G.L., Jr. 1991, C: A Reference Manual, 3rd ed. (Englewood Cliffs,

NJ: Prentice-Hall). [1]


Kernighan, B., and Ritchie, D. 1978, The C Programming Language (Englewood Cliffs, NJ:

Prentice-Hall). [2] [Reference for K&R “traditional” C. Later editions of this book conform

to the ANSI C standard.]

Meeus, J. 1982, Astronomical Formulae for Calculators, 2nd ed., revised and enlarged (Richmond, VA: Willmann-Bell). [3]

1.1 Program Organization and Control Structures

We sometimes like to point out the close analogies between computer programs,

on the one hand, and written poetry or written musical scores, on the other. All

three present themselves as visual media, symbols on a two-dimensional page or

computer screen. Yet, in all three cases, the visual, two-dimensional, frozen-in-time

representation communicates (or is supposed to communicate) something rather

different, namely a process that unfolds in time. A poem is meant to be read; music,

played; a program, executed as a sequential series of computer instructions.

In all three cases, the target of the communication, in its visual form, is a human

being. The goal is to transfer to him/her, as efficiently as can be accomplished,

the greatest degree of understanding, in advance, of how the process will unfold in

time. In poetry, this human target is the reader. In music, it is the performer. In

programming, it is the program user.

Now, you may object that the target of communication of a program is not

a human but a computer, that the program user is only an irrelevant intermediary,

a lackey who feeds the machine. This is perhaps the case in the situation where

the business executive pops a diskette into a desktop computer and feeds that

computer a black-box program in binary executable form. The computer, in this

case, doesn’t much care whether that program was written with “good programming

practice” or not.

We envision, however, that you, the readers of this book, are in quite a different

situation. You need, or want, to know not just what a program does, but also how

it does it, so that you can tinker with it and modify it to your particular application.

You need others to be able to see what you have done, so that they can criticize or

admire. In such cases, where the desired goal is maintainable or reusable code, the

targets of a program’s communication are surely human, not machine.

One key to achieving good programming practice is to recognize that programming, music, and poetry — all three being symbolic constructs of the human

brain — are naturally structured into hierarchies that have many different nested

levels. Sounds (phonemes) form small meaningful units (morphemes) which in turn

form words; words group into phrases, which group into sentences; sentences make

paragraphs, and these are organized into higher levels of meaning. Notes form

musical phrases, which form themes, counterpoints, harmonies, etc.; which form

movements, which form concertos, symphonies, and so on.

The structure in programs is equally hierarchical. Appropriately, good programming practice brings different techniques to bear on the different levels [1-3].

At a low level is the ascii character set. Then, constants, identifiers, operands,


operators. Then program statements, like a[j+1]=b+c/3.0;. Here, the best programming advice is simply be clear, or (correspondingly) don’t be too tricky. You

might momentarily be proud of yourself at writing the single line

k=(2-j)*(1+3*j)/2;

if you want to permute cyclically one of the values j = (0, 1, 2) into respectively

k = (1, 2, 0). You will regret it later, however, when you try to understand that

line. Better, and likely also faster, is

k=j+1;

if (k == 3) k=0;

Many programming stylists would even argue for the ploddingly literal

switch (j) {

case 0: k=1; break;

case 1: k=2; break;

case 2: k=0; break;

default: {

fprintf(stderr,"unexpected value for j");

exit(1);

}

}

on the grounds that it is both clear and additionally safeguarded from wrong assumptions about the possible values of j. Our preference among the implementations

is for the middle one.
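All three fragments compute the same cyclic permutation, which is easy to verify directly. Wrapped as functions (the function names are ours, for the comparison only), they look like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

/* The "tricky" one-liner from the text. */
int perm_tricky(int j) { return (2-j)*(1+3*j)/2; }

/* The preferred middle version. */
int perm_plain(int j)
{
    int k=j+1;
    if (k == 3) k=0;
    return k;
}

/* The ploddingly literal switch. */
int perm_switch(int j)
{
    switch (j) {
        case 0: return 1;
        case 1: return 2;
        case 2: return 0;
        default:
            fprintf(stderr,"unexpected value for j");
            exit(1);
    }
}
```

Checking j = 0, 1, 2 against all three confirms they agree; only the switch version also guards against other values of j.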

In this simple example, we have in fact traversed several levels of hierarchy:

Statements frequently come in “groups” or “blocks” which make sense only taken

as a whole. The middle fragment above is one example. Another is

swap=a[j];

a[j]=b[j];

b[j]=swap;

which makes immediate sense to any programmer as the exchange of two variables,

while

ans=sum=0.0;

n=1;

is very likely to be an initialization of variables prior to some iterative process. This

level of hierarchy in a program is usually evident to the eye. It is good programming

practice to put in comments at this level, e.g., “initialize” or “exchange variables.”

The next level is that of control structures. These are things like the switch

construction in the example above, for loops, and so on. This level is sufficiently

important, and relevant to the hierarchical level of the routines in this book, that

we will come back to it just below.

At still higher levels in the hierarchy, we have functions and modules, and the

whole “global” organization of the computational task to be done. In the musical

analogy, we are now at the level of movements and complete works. At these levels,


modularization and encapsulation become important programming concepts, the

general idea being that program units should interact with one another only through

clearly defined and narrowly circumscribed interfaces. Good modularization practice

is an essential prerequisite to the success of large, complicated software projects,

especially those employing the efforts of more than one programmer. It is also good

practice (if not quite as essential) in the less massive programming tasks that an

individual scientist, or reader of this book, encounters.

Some computer languages, such as Modula-2 and C++, promote good modularization with higher-level language constructs absent in C. In Modula-2, for example,

functions, type definitions, and data structures can be encapsulated into “modules”

that communicate through declared public interfaces and whose internal workings

are hidden from the rest of the program [4]. In the C++ language, the key concept

is “class,” a user-definable generalization of data type that provides for data hiding,

automatic initialization of data, memory management, dynamic typing, and operator

overloading (i.e., the user-definable extension of operators like + and * so as to be

appropriate to operands in any particular class) [5]. Properly used in defining the data

structures that are passed between program units, classes can clarify and circumscribe

these units’ public interfaces, reducing the chances of programming error and also

allowing a considerable degree of compile-time and run-time error checking.

Beyond modularization, though depending on it, lie the concepts of object-oriented programming. Here a programming language, such as C++ or Turbo Pascal

5.5 [6], allows a module’s public interface to accept redefinitions of types or actions,

and these redefinitions become shared all the way down through the module’s

hierarchy (so-called polymorphism). For example, a routine written to invert a

matrix of real numbers could — dynamically, at run time — be made able to handle

complex numbers by overloading complex data types and corresponding definitions

of the arithmetic operations. Additional concepts of inheritance (the ability to define

a data type that “inherits” all the structure of another type, plus additional structure

of its own), and object extensibility (the ability to add functionality to a module

without access to its source code, e.g., at run time), also come into play.

We have not attempted to modularize, or make objects out of, the routines in

this book, for at least two reasons. First, the chosen language, C, does not really make

this possible. Second, we envision that you, the reader, might want to incorporate

the algorithms in this book, a few at a time, into modules or objects with a structure

of your own choosing. There does not exist, at present, a standard or accepted set

of “classes” for scientific object-oriented computing. While we might have tried to

invent such a set, doing so would have inevitably tied the algorithmic content of the

book (which is its raison d'être) to some rather specific, and perhaps haphazard, set

of choices regarding class definitions.

On the other hand, we are not unfriendly to the goals of modular and object-oriented programming. Within the limits of C, we have therefore tried to structure

our programs to be “object friendly.” That is one reason we have adopted ANSI

C with its function prototyping as our default C dialect (see §1.2). Also, within

our implementation sections, we have paid particular attention to the practices of

structured programming, as we now discuss.


Control Structures

An executing program unfolds in time, but not strictly in the linear order in

which the statements are written. Program statements that affect the order in which

statements are executed, or that affect whether statements are executed, are called

control statements. Control statements never make useful sense by themselves. They

make sense only in the context of the groups or blocks of statements that they in turn

control. If you think of those blocks as paragraphs containing sentences, then the

control statements are perhaps best thought of as the indentation of the paragraph

and the punctuation between the sentences, not the words within the sentences.

We can now say what the goal of structured programming is. It is to make

program control manifestly apparent in the visual presentation of the program. You

see that this goal has nothing at all to do with how the computer sees the program.

As already remarked, computers don’t care whether you use structured programming

or not. Human readers, however, do care. You yourself will also care, once you

discover how much easier it is to perfect and debug a well-structured program than

one whose control structure is obscure.

You accomplish the goals of structured programming in two complementary

ways. First, you acquaint yourself with the small number of essential control

structures that occur over and over again in programming, and that are therefore

given convenient representations in most programming languages. You should learn

to think about your programming tasks, insofar as possible, exclusively in terms of

these standard control structures. In writing programs, you should get into the habit

of representing these standard control structures in consistent, conventional ways.

“Doesn’t this inhibit creativity?” our students sometimes ask. Yes, just

as Mozart’s creativity was inhibited by the sonata form, or Shakespeare’s by the

metrical requirements of the sonnet. The point is that creativity, when it is meant to

communicate, does well under the inhibitions of appropriate restrictions on format.

Second, you avoid, insofar as possible, control statements whose controlled

blocks or objects are difficult to discern at a glance. This means, in practice, that you

must try to avoid named labels on statements and goto’s. It is not the goto’s that

are dangerous (although they do interrupt one’s reading of a program); the named

statement labels are the hazard. In fact, whenever you encounter a named statement

label while reading a program, you will soon become conditioned to get a sinking

feeling in the pit of your stomach. Why? Because the following questions will, by

habit, immediately spring to mind: Where did control come from in a branch to this

label? It could be anywhere in the routine! What circumstances resulted in a branch

to this label? They could be anything! Certainty becomes uncertainty, understanding

dissolves into a morass of possibilities.

Some examples are now in order to make these considerations more concrete

(see Figure 1.1.1).

Catalog of Standard Structures

Iteration.

In C, simple iteration is performed with a for loop, for example

for (j=2;j<=1000;j++) {
    b[j]=a[j-1];
    a[j-1]=j;
}

IF structure. In C, the if...else construction requires some care when if clauses are nested, since the association of an else with its if is not always what the layout suggests. Consider the following fragment:

if (b > 3)
    if (a > 3) b += 1;
else b -= 1;               /* questionable! */

As judged by the indentation used on successive lines, the intent of the writer of

this code is the following: ‘If b is greater than 3 and a is greater than 3, then

increment b. If b is not greater than 3, then decrement b.’ According to the rules

of C, however, the actual meaning is ‘If b is greater than 3, then evaluate a. If a is

greater than 3, then increment b, and if a is less than or equal to 3, decrement b.’ The

point is that an else clause is associated with the most recent open if statement,

no matter how you lay it out on the page. Such confusions in meaning are easily

resolved by the inclusion of braces. They may in some instances be technically

superfluous; nevertheless, they clarify your intent and improve the program. The

above fragment should be written as

if (b > 3) {

if (a > 3) b += 1;

} else {

b -= 1;

}

Here is a working program that consists dominantly of if control statements:

#include <math.h>

#define IGREG (15+31L*(10+12L*1582))        Gregorian Calendar adopted Oct. 15, 1582.

long julday(int mm, int id, int iyyy)

In this routine julday returns the Julian Day Number that begins at noon of the calendar date

specified by month mm, day id, and year iyyy, all integer variables. Positive year signifies A.D.;

negative, B.C. Remember that the year after 1 B.C. was 1 A.D.

{

void nrerror(char error_text[]);


long jul;

int ja,jy=iyyy,jm;

if (jy == 0) nrerror("julday: there is no year zero.");

if (jy < 0) ++jy;

if (mm > 2) {                               Here is an example of a block IF-structure.

jm=mm+1;

} else {

--jy;

jm=mm+13;

}

jul = (long) (floor(365.25*jy)+floor(30.6001*jm)+id+1720995);

if (id+31L*(mm+12L*iyyy) >= IGREG) {        Test whether to change to Gregorian Calendar.
ja=(int)(0.01*jy);
jul += 2-ja+(int) (0.25*ja);

}

return jul;

}

(Astronomers number each 24-hour period, starting and ending at noon, with

a unique integer, the Julian Day Number [7]. Julian Day Zero was a very long

time ago; a convenient reference point is that Julian Day 2440000 began at noon

of May 23, 1968. If you know the Julian Day Number that begins at noon of a

given calendar date, then the day of the week of that date is obtained by adding

1 and taking the result modulo base 7; a zero answer corresponds to Sunday, 1 to

Monday, . . . , 6 to Saturday.)

While iteration. Most languages (though not FORTRAN, incidentally) provide

for structures like the following C example:

while (n < 1000) {

n *= 2;

j += 1;

}

It is the particular feature of this structure that the control-clause (in this case

n < 1000) is evaluated before each iteration. If the clause is not true, the enclosed

statements will not be executed. In particular, if this code is encountered at a time

when n is greater than or equal to 1000, the statements will not even be executed once.

Do-While iteration. Companion to the while iteration is a related control structure that tests its control-clause at the end of each iteration. In C, it looks

like this:

do {

n *= 2;

j += 1;

} while (n < 1000);

In this case, the enclosed statements will be executed at least once, independent

of the initial value of n.

Break. In this case, you have a loop that is repeated indefinitely until some

condition tested somewhere in the middle of the loop (and possibly tested in more


than one place) becomes true. At that point you wish to exit the loop and proceed

with what comes after it. In C the structure is implemented with the simple break

statement, which terminates execution of the innermost for, while, do, or switch

construction and proceeds to the next sequential instruction. (In Pascal and standard

FORTRAN, this structure requires the use of statement labels, to the detriment of clear

programming.) A typical usage of the break statement is:

for(;;) {

[statements before the test]

if (...) break;

[statements after the test]

}

[next sequential instruction]

Here is a program that uses several different iteration structures. One of us was

once asked, for a scavenger hunt, to find the date of a Friday the 13th on which the

moon was full. This is a program which accomplishes that task, giving incidentally

all other Fridays the 13th as a by-product.

#include <stdio.h>
#include <math.h>
#define ZON -5.0            Time zone −5 is Eastern Standard Time.
#define IYBEG 1900          The range of dates to be searched.
#define IYEND 2000

int main(void) /* Program badluk */

{

void flmoon(int n, int nph, long *jd, float *frac);

long julday(int mm, int id, int iyyy);

int ic,icon,idwk,im,iyyy,n;

float timzon = ZON/24.0,frac;

long jd,jday;

printf("\nFull moons on Friday the 13th from %5d to %5d\n",IYBEG,IYEND);

for (iyyy=IYBEG;iyyy<=IYEND;iyyy++) {           Loop over each year,
for (im=1;im<=12;im++) {                        and each month.
jday=julday(im,13,iyyy);                        Is the 13th of that month a Friday?
idwk=(int) ((jday+1) % 7);
if (idwk == 5) {
printf("\n%2d/13/%4d\n",im,iyyy);
n=(int)(12.37*(iyyy-1900+(im-0.5)/12.0));       First approximation to how many full moons
icon=0;                                         have occurred since 1900.
for (;;) {
flmoon(n,2,&jd,&frac);                          Get date of full moon n.
frac=24.0*(frac+timzon);                        Convert to hours in correct time zone.
if (frac < 0.0) {                               Convert from Julian Days beginning at noon
--jd;                                           to civil days beginning at midnight.
frac += 24.0;
}
if (frac > 12.0) {
++jd;
frac -= 12.0;
} else
frac += 12.0;
if (jd == jday) {                               Did we get a match?
printf("Full moon %5.1f hrs after midnight (EST).\n",frac);
break;                                          Part of the break-structure, case of a match.
} else {                                        Didn't get a match; decide whether to move
ic=(jday >= jd ? 1 : -1);                       on, or to break out.
if (ic == (-icon)) break;                       Another break, case of no match.
icon=ic;
n += ic;
}
}
}
}
}
return 0;
}

If you are merely curious, there were (or will be) occurrences of a full moon

on Friday the 13th (time zone GMT−5) on: 3/13/1903, 10/13/1905, 6/13/1919,

1/13/1922, 11/13/1970, 2/13/1987, 10/13/2000, 9/13/2019, and 8/13/2049.

Other “standard” structures.

Our advice is to avoid them. Every

programming language has some number of “goodies” that the designer just couldn’t

resist throwing in. They seemed like a good idea at the time. Unfortunately they

don’t stand the test of time! Your program becomes difficult to translate into other

languages, and difficult to read (because rarely used structures are unfamiliar to the

reader). You can almost always accomplish the supposed conveniences of these

structures in other ways.

In C, the most problematic control structure is the switch...case...default

construction (see Figure 1.1.1), which has historically been burdened by uncertainty,

from compiler to compiler, about what data types are allowed in its control expression.

Data types char and int are universally supported. For other data types, e.g., float

or double, the structure should be replaced by a more recognizable and translatable

if. . .else construction. ANSI C allows the control expression to be of type long,

but many older compilers do not.

The continue; construction, while benign, can generally be replaced by an

if construction with no loss of clarity.

About “Advanced Topics”

Material set in smaller type, like this, signals an “advanced topic,” either one outside of

the main argument of the chapter, or else one requiring of you more than the usual assumed

mathematical background, or else (in a few cases) a discussion that is more speculative or an

algorithm that is less well-tested. Nothing important will be lost if you skip the advanced

topics on a first reading of the book.

You may have noticed that, by its looping over the months and years, the program badluk

avoids using any algorithm for converting a Julian Day Number back into a calendar date. A

routine for doing just this is not very interesting structurally, but it is occasionally useful:

#include <math.h>

#define IGREG 2299161

void caldat(long julian, int *mm, int *id, int *iyyy)

Inverse of the function julday given above. Here julian is input as a Julian Day Number,

and the routine outputs mm,id, and iyyy as the month, day, and year on which the specified

Julian Day started at noon.

{

long ja,jalpha,jb,jc,jd,je;


if (julian >= IGREG) {                          Cross-over to Gregorian Calendar produces this correction.
jalpha=(long)(((float) (julian-1867216)-0.25)/36524.25);
ja=julian+1+jalpha-(long) (0.25*jalpha);
} else if (julian < 0) {                        Make day number positive by adding integer number of
ja=julian+36525*(1-julian/36525);               Julian centuries, then subtract them off at the end.
} else
ja=julian;

jb=ja+1524;

jc=(long)(6680.0+((float) (jb-2439870)-122.1)/365.25);

jd=(long)(365*jc+(0.25*jc));

je=(long)((jb-jd)/30.6001);

*id=jb-jd-(long) (30.6001*je);

*mm=je-1;

if (*mm > 12) *mm -= 12;

*iyyy=jc-4715;

if (*mm > 2) --(*iyyy);
if (*iyyy <= 0) --(*iyyy);
if (julian < 0) *iyyy -= 100*(1-julian/36525);
}
