Andreas Behr

Theory of Sample Surveys with R

Availability: In stock

Delivery time: 2-3 days

EAN/ISBN: 9783838543284
1st edition, 2015

Details

This English-language textbook provides an up-to-date treatment of the modern design-based theory of survey sampling. All methods are illustrated with numerical examples and applied using the statistical software R. Numerous empirical examples and simulations provide insights into the properties of estimation functions.
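
As a small, self-contained illustration of the kind of computation the book walks through (not an excerpt from the text; the population values below are invented for this sketch), the following R lines draw a simple random sample with the sample() function and compute the π-estimator (Horvitz-Thompson estimator) of a population total, then check its design-unbiasedness by simulation:

    ## Minimal sketch: simple random sampling and the pi-estimator of a total.
    ## The population values are made up for illustration.
    set.seed(1)                       # reproducibility
    y <- c(4, 7, 2, 9, 5, 6, 3, 8)    # hypothetical study variable, N = 8 units
    N <- length(y)                    # population size
    n <- 4                            # sample size

    s     <- sample(1:N, size = n)    # simple random sample without replacement (SI)
    pi_k  <- rep(n / N, n)            # inclusion probabilities under SI: n/N for each unit
    t_hat <- sum(y[s] / pi_k)         # pi-estimator of the total: sum of y_k / pi_k
    c(estimate = t_hat, true_total = sum(y))

    ## Small simulation: the pi-estimator is design-unbiased, so its average
    ## over many repeated samples should be close to the true total.
    t_sim <- replicate(10000, sum(y[sample(1:N, size = n)]) * N / n)
    mean(t_sim)                       # approximately sum(y) = 44

Under simple random sampling without replacement every unit has inclusion probability n/N, which is why the π-estimator of the total reduces to N/n times the sample sum in the simulation line.
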
  • Andreas Behr: Theory of Sample Surveys with R
  • Preface
  • Contents
  • List of Figures
  • 1 Introduction
  • 1.1 Sources of randomness
  • 1.1.1 Stochastic model
  • 1.1.2 Design approach
  • 1.2 Surveys
  • 1.2.1 Characteristics of surveys
  • 1.2.2 Sampling frame
  • 1.2.3 Probability sampling
  • 1.2.4 Sampling and inference
  • 1.3 Specific designs
  • 1.3.1 Simple random sampling
  • 1.3.2 Stratified sampling
  • 1.3.3 Cluster sampling
  • 1.4 Outline of the book
  • 1.5 Exercises
  • 2 Introduction to R
  • 2.1 Some R basics
  • 2.1.1 Object orientation
  • 2.1.2 Dataframes
  • 2.1.3 Sequences, replications, conditions and loops
  • 2.1.4 Matrices
  • 2.1.5 Storing and reading data files
  • 2.1.6 Probability distributions
  • 2.1.7 Graphics
  • 2.1.8 Linear regression
  • 2.2 Sampling from a population
  • 2.2.1 Enumeration of samples
  • 2.2.2 The sample() function
  • 2.3 Exercises
  • 3 Inclusion probabilities
  • 3.1 Introduction
  • 3.2 Some notation
  • 3.3 Inclusion indicator I
  • 3.4 A small example
  • 3.5 Inclusion probabilities π
  • 3.6 Obtaining inclusion probabilities with R
  • 3.7 Simple random sampling (SI)
  • 3.8 Properties of the inclusion indicator
  • 3.8.1 The expected value of the inclusion indicator
  • 3.8.2 The variance of the inclusion indicator
  • 3.8.3 The covariance of the inclusion indicator
  • 3.8.4 Properties of the covariance
  • 3.8.5 Covariance matrix and sums of sums
  • 3.9 Exercises
  • 4 Estimation
  • 4.1 Introduction
  • 4.2 Estimating functions and estimators
  • 4.3 Properties of estimation functions
  • 4.4 The π-estimator
  • 4.4.1 Properties of the π-estimator
  • 4.4.2 Expected value and variance of the π-estimator
  • 4.4.3 An alternative expression of the variance
  • 4.4.4 The Yates-Grundy variance of the total
  • 4.5 Estimation using R
  • 4.5.1 A small numerical example
  • 4.5.2 An empirical example: PSID
  • 4.6 Generating samples with unequal inclusion probabilities
  • 4.6.1 Probabilities proportional to size (PPS)
  • 4.6.2 The Sampford-algorithm
  • 4.7 Exercises
  • 5 Simple sampling
  • 5.1 Introduction
  • 5.2 Some general estimation functions
  • 5.2.1 The π-estimator for the total
  • 5.2.2 The π-estimator for the mean
  • 5.2.3 The π-estimator for a proportion
  • 5.3 Simple random sampling
  • 5.3.1 The π-estimator for the total (SI)
  • 5.3.2 The π-estimator for the mean (SI)
  • 5.3.3 The π-estimator for a proportion (SI)
  • 5.4 Some examples using R
  • 5.5 Exercises
  • 6 Confidence intervals
  • 6.1 Introduction
  • 6.2 Chebyshev-inequality
  • 6.2.1 Derivation of the Chebyshev-inequality
  • 6.2.2 Application of the Chebyshev-inequality
  • 6.3 Confidence intervals based on a specific sample
  • 6.4 Some general remarks
  • 6.4.1 No approximate normality
  • 6.4.2 Simplified variance estimators
  • 6.4.3 Effect of simplification in the simple random sampling case
  • 6.4.4 Effect of simplification for the general π-estimator
  • 6.4.5 Effect of simplification in stratified and clustered samples
  • 6.4.6 Bootstrap
  • 6.5 Exercises
  • 7 Stratified sampling
  • 7.1 Introduction
  • 7.2 Some notation and an example
  • 7.2.1 Notation
  • 7.2.2 Example: Sectors of employment as strata
  • 7.3 Estimation of the total
  • 7.3.1 Simple random sampling within strata
  • 7.3.2 Example: simple random sampling within sectors
  • 7.4 Choosing the sample size for individual strata
  • 7.4.1 The minimization problem
  • 7.4.2 The Cauchy-Schwarz inequality
  • 7.4.3 Solving the minimization problem
  • 7.4.4 Example: optimal sampling size within sectors
  • 7.5 Sample allocation and efficiency
  • 7.5.1 Variance comparisons based on the variance decomposition
  • 7.5.2 No stratification versus proportional allocation
  • 7.5.3 Proportional allocation versus optimal allocation
  • 7.5.4 No stratification versus optimal allocation
  • 7.5.5 An efficiency comparison with R
  • 7.6 Exercises
  • 8 Cluster sampling
  • 8.1 Introduction
  • 8.2 Notation
  • 8.2.1 Clustering the population
  • 8.2.2 Artificially clustering the PSID sample
  • 8.2.3 Sampling clusters
  • 8.2.4 Inclusion probabilities
  • 8.3 Estimating the population total
  • 8.3.1 The π-estimator of the population total
  • 8.3.2 Variance of the π-estimator of the population total
  • 8.4 Simple random sampling of clusters (SIC)
  • 8.4.1 The π-estimator of the population total
  • 8.4.2 The π-estimator in the PSID example
  • 8.4.3 Variance of the π-estimator of the population total
  • 8.4.4 Variance of the mean estimator in the PSID example
  • 8.5 Exercises
  • 9 Auxiliary variables
  • 9.1 Introduction
  • 9.2 The ratio estimator
  • 9.2.1 Example of the ratio estimator using PSID data
  • 9.2.2 Taylor series expansion
  • 9.2.3 The approximate variance of the ratio estimator
  • 9.2.4 Estimating the approximate variance of the ratio estimator using PSID data
  • 9.2.5 Comparison of the ratio estimator with the simple π-estimator
  • 9.2.6 The ratio estimator in the regression context
  • 9.2.7 The linear regression model under specific heteroskedasticity assumption
  • 9.3 The difference estimator
  • 9.3.1 The difference estimator using regression notation
  • 9.3.2 Properties of the difference estimator
  • 9.3.3 The difference estimator of average wage using the PSID data
  • 9.4 Exercises
  • 10 Regression
  • 10.1 Introduction
  • 10.1.1 Regression without intercept
  • 10.1.2 Regression with intercept
  • 10.1.3 Multivariate linear regression with intercept
  • 10.2 Variance of the parameter estimators
  • 10.2.1 Linear approximation of the π-estimator
  • 10.2.2 The variance of the linear approximation of the π-estimator
  • 10.2.3 Simple regression through the origin
  • 10.2.4 Simple regression with intercept
  • 10.2.5 Simple regression with intercept and simple random sampling
  • 10.2.6 Wage regression with PSID data and simple random sampling
  • 10.3 Exercises
  • Indices
  • Functions Index
  • Subject Index