path: root/doc/texinfo.tex
blob: b5f31415771ddd10ffcb1b36ec4d6c2c4f7f68a7 (plain)
6999
7000
7001
7002
7003
7004
7005
7006
7007
7008
7009
7010
7011
7012
7013
7014
7015
7016
7017
7018
7019
7020
7021
7022
7023
7024
7025
7026
7027
7028
7029
7030
7031
7032
7033
7034
7035
7036
7037
7038
7039
7040
7041
7042
7043
7044
7045
7046
7047
7048
7049
7050
7051
7052
7053
7054
7055
7056
7057
7058
7059
7060
7061
7062
7063
7064
7065
7066
7067
7068
7069
7070
7071
7072
7073
7074
7075
7076
7077
7078
7079
7080
7081
7082
7083
7084
7085
7086
7087
7088
7089
7090
7091
7092
7093
7094
7095
7096
7097
7098
7099
7100
7101
7102
7103
7104
7105
7106
7107
7108
7109
7110
7111
7112
7113
7114
7115
7116
7117
7118
7119
7120
7121
7122
7123
7124
7125
7126
7127
7128
7129
7130
7131
7132
7133
7134
7135
7136
7137
7138
7139
7140
7141
7142
7143
7144
7145
7146
7147
7148
7149
7150
7151
7152
7153
7154
7155
7156
7157
7158
7159
7160
7161
7162
7163
7164
7165
7166
7167
7168
7169
7170
7171
7172
7173
7174
7175
7176
7177
7178
7179
7180
7181
7182
7183
7184
7185
7186
7187
7188
7189
7190
7191
7192
7193
7194
7195
7196
7197
7198
7199
7200
7201
7202
7203
7204
7205
7206
7207
7208
7209
7210
7211
7212
7213
7214
7215
7216
7217
7218
7219
7220
7221
7222
7223
7224
7225
7226
7227
7228
7229
7230
7231
7232
7233
7234
7235
7236
7237
7238
7239
7240
7241
7242
7243
7244
7245
7246
7247
7248
7249
7250
7251
7252
7253
7254
7255
7256
7257
7258
7259
7260
7261
7262
7263
7264
7265
7266
7267
7268
7269
7270
7271
7272
7273
7274
7275
7276
7277
7278
7279
7280
7281
7282
7283
7284
7285
7286
7287
7288
7289
7290
7291
7292
7293
7294
7295
7296
7297
7298
7299
7300
7301
7302
7303
7304
7305
7306
7307
7308
7309
7310
7311
7312
7313
7314
7315
7316
7317
7318
7319
7320
7321
7322
7323
7324
7325
7326
7327
7328
7329
7330
7331
7332
7333
7334
7335
7336
7337
7338
7339
7340
7341
7342
7343
7344
7345
7346
7347
7348
7349
7350
7351
7352
7353
7354
7355
7356
7357
7358
7359
7360
7361
7362
7363
7364
7365
7366
7367
7368
7369
7370
7371
7372
7373
7374
7375
7376
7377
7378
7379
7380
7381
7382
7383
7384
7385
7386
7387
7388
7389
7390
7391
7392
7393
7394
7395
7396
7397
7398
7399
7400
7401
7402
7403
7404
7405
7406
7407
7408
7409
7410
7411
7412
7413
7414
7415
7416
7417
7418
7419
7420
7421
7422
7423
7424
7425
7426
7427
7428
7429
7430
7431
7432
7433
7434
7435
7436
7437
7438
7439
7440
7441
7442
7443
7444
7445
7446
7447
7448
7449
7450
7451
7452
7453
7454
7455
7456
7457
7458
7459
7460
7461
7462
7463
7464
7465
7466
7467
7468
7469
7470
7471
7472
7473
7474
7475
7476
7477
7478
7479
7480
7481
7482
7483
7484
7485
7486
7487
7488
7489
7490
7491
7492
7493
7494
7495
7496
7497
7498
7499
7500
7501
7502
7503
7504
7505
7506
7507
7508
7509
7510
7511
7512
7513
7514
7515
7516
7517
7518
7519
7520
7521
7522
7523
7524
7525
7526
7527
7528
7529
7530
7531
7532
7533
7534
7535
7536
7537
7538
7539
7540
7541
7542
7543
7544
7545
7546
7547
7548
7549
7550
7551
7552
7553
7554
7555
7556
7557
7558
7559
7560
7561
7562
7563
7564
7565
7566
7567
7568
7569
7570
7571
7572
7573
7574
7575
7576
7577
7578
7579
7580
7581
7582
7583
7584
7585
7586
7587
7588
7589
7590
7591
7592
7593
7594
7595
7596
7597
7598
7599
7600
7601
7602
7603
7604
7605
7606
7607
7608
7609
7610
7611
7612
7613
7614
7615
7616
7617
7618
7619
7620
7621
7622
7623
7624
7625
7626
7627
7628
7629
7630
7631
7632
7633
7634
7635
7636
7637
7638
7639
7640
7641
7642
7643
7644
7645
7646
7647
7648
7649
7650
7651
7652
7653
7654
7655
7656
7657
7658
7659
7660
7661
7662
7663
7664
7665
7666
7667
7668
7669
7670
7671
7672
7673
7674
7675
7676
7677
7678
7679
7680
7681
7682
7683
7684
7685
7686
7687
7688
7689
7690
7691
7692
7693
7694
7695
7696
7697
7698
7699
7700
7701
7702
7703
7704
7705
7706
7707
7708
7709
7710
7711
7712
7713
7714
7715
7716
7717
7718
7719
7720
7721
7722
7723
7724
7725
7726
7727
7728
7729
7730
7731
7732
7733
7734
7735
7736
7737
7738
7739
7740
7741
7742
7743
7744
7745
7746
7747
7748
7749
7750
7751
7752
7753
7754
7755
7756
7757
7758
7759
7760
7761
7762
7763
7764
7765
7766
7767
7768
7769
7770
7771
7772
7773
7774
7775
7776
7777
7778
7779
7780
7781
7782
7783
7784
7785
7786
7787
7788
7789
7790
7791
7792
7793
7794
7795
7796
7797
7798
7799
7800
7801
7802
7803
7804
7805
7806
7807
7808
7809
7810
7811
7812
7813
7814
7815
7816
7817
7818
7819
7820
7821
7822
7823
7824
7825
7826
7827
7828
7829
7830
7831
7832
7833
7834
7835
7836
7837
7838
7839
7840
7841
7842
7843
7844
7845
7846
7847
7848
7849
7850
7851
7852
7853
7854
7855
7856
7857
7858
7859
7860
7861
7862
7863
7864
7865
7866
7867
7868
7869
7870
7871
7872
7873
7874
7875
7876
7877
7878
7879
7880
7881
7882
7883
7884
7885
7886
7887
7888
7889
7890
7891
7892
7893
7894
7895
7896
7897
7898
7899
7900
7901
7902
7903
7904
7905
7906
7907
7908
7909
7910
7911
7912
7913
7914
7915
7916
7917
7918
7919
7920
7921
7922
7923
7924
7925
7926
7927
7928
7929
7930
7931
7932
7933
7934
7935
7936
7937
7938
7939
7940
7941
7942
7943
7944
7945
7946
7947
7948
7949
7950
7951
7952
7953
7954
7955
7956
7957
7958
7959
7960
7961
7962
7963
7964
7965
7966
7967
7968
7969
7970
7971
7972
7973
7974
7975
7976
7977
7978
7979
7980
7981
7982
7983
7984
7985
7986
7987
7988
7989
7990
7991
7992
7993
7994
7995
7996
7997
7998
7999
8000
8001
8002
8003
8004
8005
8006
8007
8008
8009
8010
8011
8012
8013
8014
8015
8016
8017
8018
8019
8020
8021
8022
8023
8024
8025
8026
8027
8028
8029
8030
8031
8032
8033
8034
8035
8036
8037
8038
8039
8040
8041
8042
8043
8044
8045
8046
8047
8048
8049
8050
8051
8052
8053
8054
8055
8056
8057
8058
8059
8060
8061
8062
8063
8064
8065
8066
8067
8068
8069
8070
8071
8072
8073
8074
8075
8076
8077
8078
8079
8080
8081
8082
8083
8084
8085
8086
8087
8088
8089
8090
8091
8092
8093
8094
8095
8096
8097
8098
8099
8100
8101
8102
8103
8104
8105
8106
8107
8108
8109
8110
8111
8112
8113
8114
8115
8116
8117
8118
8119
8120
8121
8122
8123
8124
8125
8126
8127
8128
8129
8130
8131
8132
8133
8134
8135
8136
8137
8138
8139
8140
8141
8142
8143
8144
8145
8146
8147
8148
8149
8150
8151
8152
8153
8154
8155
8156
8157
8158
8159
8160
8161
8162
8163
8164
8165
8166
8167
8168
8169
8170
8171
8172
8173
8174
8175
8176
8177
8178
8179
8180
8181
8182
8183
8184
8185
8186
8187
8188
8189
8190
8191
8192
8193
8194
8195
8196
8197
8198
8199
8200
8201
8202
8203
8204
8205
8206
8207
8208
8209
8210
8211
8212
8213
8214
8215
8216
8217
8218
8219
8220
8221
8222
8223
8224
8225
8226
8227
8228
8229
8230
8231
8232
8233
8234
8235
8236
8237
8238
8239
8240
8241
8242
8243
8244
8245
8246
8247
8248
8249
8250
8251
8252
8253
8254
8255
8256
8257
8258
8259
8260
8261
8262
8263
8264
8265
8266
8267
8268
8269
8270
8271
8272
8273
8274
8275
8276
8277
8278
8279
8280
8281
8282
8283
8284
8285
8286
8287
8288
8289
8290
8291
8292
8293
8294
8295
8296
8297
8298
8299
8300
8301
8302
8303
8304
8305
8306
8307
8308
8309
8310
8311
8312
8313
8314
8315
8316
8317
8318
8319
8320
8321
8322
8323
8324
8325
8326
8327
8328
8329
8330
8331
8332
8333
8334
8335
8336
8337
8338
8339
8340
8341
8342
8343
8344
8345
8346
8347
8348
8349
8350
8351
8352
8353
8354
8355
8356
8357
8358
8359
8360
8361
8362
8363
8364
8365
8366
8367
8368
8369
8370
8371
8372
8373
8374
8375
8376
8377
8378
8379
8380
8381
8382
8383
8384
8385
8386
8387
8388
8389
8390
8391
8392
8393
8394
8395
8396
8397
8398
8399
8400
8401
8402
8403
8404
8405
8406
8407
8408
8409
8410
8411
8412
8413
8414
8415
8416
8417
8418
8419
8420
8421
8422
8423
8424
8425
8426
8427
8428
8429
8430
8431
8432
8433
8434
8435
8436
8437
8438
8439
8440
8441
8442
8443
8444
8445
8446
8447
8448
8449
8450
8451
8452
8453
8454
8455
8456
8457
8458
8459
8460
8461
8462
8463
8464
8465
8466
8467
8468
8469
8470
8471
8472
8473
8474
8475
8476
8477
8478
8479
8480
8481
8482
8483
8484
8485
8486
8487
8488
8489
8490
8491
8492
8493
8494
8495
8496
8497
8498
8499
8500
8501
8502
8503
8504
8505
8506
8507
8508
8509
8510
8511
8512
8513
8514
8515
8516
8517
8518
8519
8520
8521
8522
8523
8524
8525
8526
8527
8528
8529
8530
8531
8532
8533
8534
8535
8536
8537
8538
8539
8540
8541
8542
8543
8544
8545
8546
8547
8548
8549
8550
8551
8552
8553
8554
8555
8556
8557
8558
8559
8560
8561
8562
8563
8564
8565
8566
8567
8568
8569
8570
8571
8572
8573
8574
8575
8576
8577
8578
8579
8580
8581
8582
8583
8584
8585
8586
8587
8588
8589
8590
8591
8592
8593
8594
8595
8596
8597
8598
8599
8600
8601
8602
8603
8604
8605
8606
8607
8608
8609
8610
8611
8612
8613
8614
8615
8616
8617
8618
8619
8620
8621
8622
8623
8624
8625
8626
8627
8628
8629
8630
8631
8632
8633
8634
8635
8636
8637
8638
8639
8640
8641
8642
8643
8644
8645
8646
8647
8648
8649
8650
8651
8652
8653
8654
8655
8656
8657
8658
8659
8660
8661
8662
8663
8664
8665
8666
8667
8668
8669
8670
8671
8672
8673
8674
8675
8676
8677
8678
8679
8680
8681
8682
8683
8684
8685
8686
8687
8688
8689
8690
8691
8692
8693
8694
8695
8696
8697
8698
8699
8700
8701
8702
8703
8704
8705
8706
8707
8708
8709
8710
8711
8712
8713
8714
8715
8716
8717
8718
8719
8720
8721
8722
8723
8724
8725
8726
8727
8728
8729
8730
8731
8732
8733
8734
8735
8736
8737
8738
8739
8740
8741
8742
8743
8744
8745
8746
8747
8748
8749
8750
8751
8752
8753
8754
8755
8756
8757
8758
8759
8760
8761
8762
8763
8764
8765
8766
8767
8768
8769
8770
8771
8772
8773
8774
8775
8776
8777
8778
8779
8780
8781
8782
8783
8784
8785
8786
8787
8788
8789
8790
8791
8792
8793
8794
8795
8796
8797
8798
8799
8800
8801
8802
8803
8804
8805
8806
8807
8808
8809
8810
8811
8812
8813
8814
8815
8816
8817
8818
8819
8820
8821
8822
8823
8824
8825
8826
8827
8828
8829
8830
8831
8832
8833
8834
8835
8836
8837
8838
8839
8840
8841
8842
8843
8844
8845
8846
8847
8848
8849
8850
8851
8852
8853
8854
8855
8856
8857
8858
8859
8860
8861
8862
8863
8864
8865
8866
8867
8868
8869
8870
8871
8872
8873
8874
8875
8876
8877
8878
8879
8880
8881
8882
8883
8884
8885
8886
8887
8888
8889
8890
8891
8892
8893
8894
8895
8896
8897
8898
8899
8900
8901
8902
8903
8904
8905
8906
8907
8908
8909
8910
8911
8912
8913
8914
8915
8916
8917
8918
8919
8920
8921
8922
8923
8924
8925
8926
8927
8928
8929
8930
8931
8932
8933
8934
8935
8936
8937
8938
8939
8940
8941
8942
8943
8944
8945
8946
8947
8948
8949
8950
8951
8952
8953
8954
8955
8956
8957
8958
8959
8960
8961
8962
8963
8964
8965
8966
8967
8968
8969
8970
8971
8972
8973
8974
8975
8976
8977
8978
8979
8980
8981
8982
8983
8984
8985
8986
8987
8988
8989
8990
8991
8992
8993
8994
8995
8996
8997
8998
8999
9000
9001
9002
9003
9004
9005
9006
9007
9008
9009
9010
9011
9012
9013
9014
9015
9016
9017
9018
9019
9020
9021
9022
9023
9024
9025
9026
9027
9028
9029
9030
9031
9032
9033
9034
9035
9036
9037
9038
9039
9040
9041
9042
9043
9044
9045
9046
9047
9048
9049
9050
9051
9052
9053
9054
9055
9056
9057
9058
9059
9060
9061
9062
9063
9064
9065
9066
9067
9068
9069
9070
9071
9072
9073
9074
9075
9076
9077
9078
9079
9080
9081
9082
9083
9084
9085
9086
9087
9088
9089
9090
9091
9092
9093
9094
9095
9096
9097
9098
9099
9100
9101
9102
9103
9104
9105
9106
9107
9108
9109
9110
9111
9112
9113
9114
9115
9116
9117
9118
9119
9120
9121
9122
9123
9124
9125
9126
9127
9128
9129
9130
9131
9132
9133
9134
9135
9136
9137
9138
9139
9140
9141
9142
9143
9144
9145
9146
9147
9148
9149
9150
9151
9152
9153
9154
9155
9156
9157
9158
9159
9160
9161
9162
9163
9164
9165
9166
9167
9168
9169
9170
9171
9172
9173
9174
9175
9176
9177
9178
9179
9180
9181
9182
9183
9184
9185
9186
9187
9188
9189
9190
9191
9192
9193
9194
9195
9196
9197
9198
9199
9200
9201
9202
9203
9204
9205
9206
9207
9208
9209
9210
9211
9212
9213
9214
9215
9216
9217
9218
9219
9220
9221
9222
9223
9224
9225
9226
9227
9228
9229
9230
9231
9232
9233
9234
9235
9236
9237
9238
9239
9240
9241
9242
9243
9244
9245
9246
9247
9248
9249
9250
9251
9252
9253
9254
9255
9256
9257
9258
9259
9260
9261
9262
9263
9264
9265
9266
9267
9268
9269
9270
9271
9272
9273
9274
9275
9276
9277
9278
9279
9280
9281
9282
9283
9284
9285
9286
9287
9288
9289
9290
9291
9292
9293
9294
9295
9296
9297
9298
9299
9300
9301
9302
9303
9304
9305
9306
9307
9308
9309
9310
9311
9312
9313
9314
9315
9316
9317
9318
9319
9320
9321
9322
9323
9324
9325
9326
9327
9328
9329
9330
9331
9332
9333
9334
9335
9336
9337
9338
9339
9340
9341
9342
9343
9344
9345
9346
9347
9348
9349
9350
9351
9352
9353
9354
9355
9356
9357
9358
9359
9360
9361
9362
9363
9364
9365
9366
9367
9368
9369
9370
9371
9372
9373
9374
9375
9376
9377
9378
9379
9380
9381
9382
9383
9384
9385
9386
9387
9388
9389
9390
9391
9392
9393
9394
9395
9396
9397
9398
9399
9400
9401
9402
9403
9404
9405
9406
9407
9408
9409
9410
9411
9412
9413
9414
9415
9416
9417
9418
9419
9420
9421
9422
9423
9424
9425
9426
9427
9428
9429
9430
9431
9432
9433
9434
9435
9436
9437
9438
9439
9440
9441
9442
9443
9444
9445
9446
9447
9448
9449
9450
9451
9452
9453
9454
9455
9456
9457
9458
9459
9460
9461
9462
9463
9464
9465
9466
9467
9468
9469
9470
9471
9472
9473
9474
9475
9476
9477
9478
9479
9480
9481
9482
9483
9484
9485
9486
9487
9488
9489
9490
9491
9492
9493
9494
9495
9496
9497
9498
9499
9500
9501
9502
9503
9504
9505
9506
9507
9508
9509
9510
9511
9512
9513
9514
9515
9516
9517
9518
9519
9520
9521
9522
9523
9524
9525
9526
9527
9528
9529
9530
9531
9532
9533
9534
9535
9536
9537
9538
9539
9540
9541
9542
9543
9544
9545
9546
9547
9548
9549
9550
9551
9552
9553
9554
9555
9556
9557
9558
9559
9560
9561
9562
9563
9564
9565
9566
9567
9568
9569
9570
9571
9572
9573
9574
9575
9576
9577
9578
9579
9580
9581
9582
9583
9584
9585
9586
9587
9588
9589
9590
9591
9592
9593
9594
9595
9596
9597
9598
9599
9600
9601
9602
9603
9604
9605
9606
9607
9608
9609
9610
9611
9612
9613
9614
9615
9616
9617
9618
9619
9620
9621
9622
9623
9624
9625
9626
9627
9628
9629
9630
9631
9632
9633
9634
9635
9636
9637
9638
9639
9640
9641
9642
9643
9644
9645
9646
9647
9648
9649
9650
9651
9652
9653
9654
9655
9656
9657
9658
9659
9660
9661
9662
9663
9664
9665
9666
9667
9668
9669
9670
9671
9672
9673
9674
9675
9676
9677
9678
9679
9680
9681
9682
9683
9684
9685
9686
9687
9688
9689
9690
9691
9692
9693
9694
9695
9696
9697
9698
9699
9700
9701
9702
9703
9704
9705
9706
9707
9708
9709
9710
9711
9712
9713
9714
9715
9716
9717
9718
9719
9720
9721
9722
9723
9724
9725
9726
9727
9728
9729
9730
9731
9732
9733
9734
9735
9736
9737
9738
9739
9740
9741
9742
9743
9744
9745
9746
9747
9748
9749
9750
9751
9752
9753
9754
9755
9756
9757
9758
9759
9760
9761
9762
9763
9764
9765
9766
9767
9768
9769
9770
9771
9772
9773
9774
9775
9776
9777
9778
9779
9780
9781
9782
9783
9784
9785
9786
9787
9788
9789
9790
9791
9792
9793
9794
9795
9796
9797
9798
9799
9800
9801
9802
9803
9804
9805
9806
9807
9808
9809
9810
9811
9812
9813
9814
9815
9816
9817
9818
9819
9820
9821
9822
9823
9824
9825
9826
9827
9828
9829
9830
9831
9832
9833
9834
9835
9836
9837
9838
9839
9840
9841
9842
9843
9844
9845
9846
9847
9848
9849
9850
9851
9852
9853
9854
9855
9856
9857
9858
9859
9860
9861
9862
9863
9864
9865
9866
9867
9868
9869
9870
9871
9872
9873
9874
9875
9876
9877
9878
9879
9880
9881
9882
9883
9884
9885
9886
9887
9888
9889
9890
9891
9892
9893
9894
9895
9896
9897
9898
9899
9900
9901
9902
9903
9904
9905
9906
9907
9908
9909
9910
9911
9912
9913
9914
9915
9916
9917
9918
9919
9920
9921
9922
9923
9924
9925
9926
9927
9928
9929
9930
9931
9932
9933
9934
9935
9936
9937
9938
9939
9940
9941
9942
9943
9944
9945
9946
9947
9948
9949
9950
9951
9952
9953
9954
9955
9956
9957
9958
9959
9960
9961
9962
9963
9964
9965
9966
9967
9968
9969
9970
9971
9972
9973
9974
9975
9976
9977
9978
9979
9980
9981
9982
9983
9984
9985
9986
9987
9988
9989
9990
9991
9992
9993
9994
9995
9996
9997
9998
9999
10000
10001
10002
10003
10004
10005
10006
10007
10008
10009
10010
10011
10012
10013
10014
10015
10016
10017
10018
10019
10020
10021
10022
10023
10024
10025
10026
10027
10028
10029
10030
10031
10032
10033
10034
10035
10036
10037
10038
10039
10040
10041
10042
10043
10044
10045
10046
10047
10048
10049
10050
10051
10052
10053
10054
10055
10056
10057
10058
10059
10060
10061
10062
10063
10064
10065
10066
10067
10068
10069
10070
10071
10072
10073
10074
% texinfo.tex -- TeX macros to handle Texinfo files.
% 
% Load plain if necessary, i.e., if running under initex.
\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi
%
\def\texinfoversion{2012-11-08.11}
%
% Copyright 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995,
% 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006,
% 2007, 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc.
%
% This texinfo.tex file is free software: you can redistribute it and/or
% modify it under the terms of the GNU General Public License as
% published by the Free Software Foundation, either version 3 of the
% License, or (at your option) any later version.
%
% This texinfo.tex file is distributed in the hope that it will be
% useful, but WITHOUT ANY WARRANTY; without even the implied warranty
% of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
% General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program.  If not, see <http://www.gnu.org/licenses/>.
%
% As a special exception, when this file is read by TeX when processing
% a Texinfo source document, you may use the result without
% restriction.  (This has been our intent since Texinfo was invented.)
%
% Please try the latest version of texinfo.tex before submitting bug
% reports; you can get the latest version from:
%   http://ftp.gnu.org/gnu/texinfo/ (the Texinfo release area), or
%   http://ftpmirror.gnu.org/texinfo/ (same, via a mirror), or
%   http://www.gnu.org/software/texinfo/ (the Texinfo home page)
% The texinfo.tex in any given distribution could well be out
% of date, so if that's what you're using, please check.
%
% Send bug reports to bug-texinfo@gnu.org.  Please include a complete
% document in each bug report with which we can reproduce the problem.
% Patches are, of course, greatly appreciated.
%
% To process a Texinfo manual with TeX, it's most reliable to use the
% texi2dvi shell script that comes with the distribution.  For a simple
% manual foo.texi, however, you can get away with this:
%   tex foo.texi
%   texindex foo.??
%   tex foo.texi
%   tex foo.texi
%   dvips foo.dvi -o  # or whatever; this makes foo.ps.
% The extra TeX runs get the cross-reference information correct.
% Sometimes one run after texindex suffices, and sometimes you need more
% than two; texi2dvi does it as many times as necessary.
%
% It is possible to adapt texinfo.tex for other languages, to some
% extent.  You can get the existing language-specific files from the
% full Texinfo distribution.
%
% The GNU Texinfo home page is http://www.gnu.org/software/texinfo.


\message{Loading texinfo [version \texinfoversion]:}

% If in a .fmt file, print the version number
% and turn on active characters that we couldn't do earlier because
% they might have appeared in the input file name.
\everyjob{\message{[Texinfo version \texinfoversion]}%
  \catcode`+=\active \catcode`\_=\active}

\chardef\other=12

% We never want plain's \outer definition of \+ in Texinfo.
% For @tex, we can use \tabalign.
\let\+ = \relax

% Save some plain tex macros whose names we will redefine.
\let\ptexb=\b
\let\ptexbullet=\bullet
\let\ptexc=\c
\let\ptexcomma=\,
\let\ptexdot=\.
\let\ptexdots=\dots
\let\ptexend=\end
\let\ptexequiv=\equiv
\let\ptexexclam=\!
\let\ptexfootnote=\footnote
\let\ptexgtr=>
\let\ptexhat=^
\let\ptexi=\i
\let\ptexindent=\indent
\let\ptexinsert=\insert
\let\ptexlbrace=\{
\let\ptexless=<
\let\ptexnewwrite\newwrite
\let\ptexnoindent=\noindent
\let\ptexplus=+
\let\ptexraggedright=\raggedright
\let\ptexrbrace=\}
\let\ptexslash=\/
\let\ptexstar=\*
\let\ptext=\t
\let\ptextop=\top
{\catcode`\'=\active \global\let\ptexquoteright'}% active in plain's math mode

% If this character appears in an error message or help string, it
% starts a new line in the output.
\newlinechar = `^^J

% Use TeX 3.0's \inputlineno to get the line number, for better error
% messages, but if we're using an old version of TeX, don't do anything.
%
\ifx\inputlineno\thisisundefined
  \let\linenumber = \empty % Pre-3.0.
\else
  \def\linenumber{l.\the\inputlineno:\space}
\fi

% Set up fixed words for English if not already set.
\ifx\putwordAppendix\undefined  \gdef\putwordAppendix{Appendix}\fi
\ifx\putwordChapter\undefined   \gdef\putwordChapter{Chapter}\fi
\ifx\putworderror\undefined     \gdef\putworderror{error}\fi
\ifx\putwordfile\undefined      \gdef\putwordfile{file}\fi
\ifx\putwordin\undefined        \gdef\putwordin{in}\fi
\ifx\putwordIndexIsEmpty\undefined       \gdef\putwordIndexIsEmpty{(Index is empty)}\fi
\ifx\putwordIndexNonexistent\undefined   \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi
\ifx\putwordInfo\undefined      \gdef\putwordInfo{Info}\fi
\ifx\putwordInstanceVariableof\undefined \gdef\putwordInstanceVariableof{Instance Variable of}\fi
\ifx\putwordMethodon\undefined  \gdef\putwordMethodon{Method on}\fi
\ifx\putwordNoTitle\undefined   \gdef\putwordNoTitle{No Title}\fi
\ifx\putwordof\undefined        \gdef\putwordof{of}\fi
\ifx\putwordon\undefined        \gdef\putwordon{on}\fi
\ifx\putwordpage\undefined      \gdef\putwordpage{page}\fi
\ifx\putwordsection\undefined   \gdef\putwordsection{section}\fi
\ifx\putwordSection\undefined   \gdef\putwordSection{Section}\fi
\ifx\putwordsee\undefined       \gdef\putwordsee{see}\fi
\ifx\putwordSee\undefined       \gdef\putwordSee{See}\fi
\ifx\putwordShortTOC\undefined  \gdef\putwordShortTOC{Short Contents}\fi
\ifx\putwordTOC\undefined       \gdef\putwordTOC{Table of Contents}\fi
%
\ifx\putwordMJan\undefined \gdef\putwordMJan{January}\fi
\ifx\putwordMFeb\undefined \gdef\putwordMFeb{February}\fi
\ifx\putwordMMar\undefined \gdef\putwordMMar{March}\fi
\ifx\putwordMApr\undefined \gdef\putwordMApr{April}\fi
\ifx\putwordMMay\undefined \gdef\putwordMMay{May}\fi
\ifx\putwordMJun\undefined \gdef\putwordMJun{June}\fi
\ifx\putwordMJul\undefined \gdef\putwordMJul{July}\fi
\ifx\putwordMAug\undefined \gdef\putwordMAug{August}\fi
\ifx\putwordMSep\undefined \gdef\putwordMSep{September}\fi
\ifx\putwordMOct\undefined \gdef\putwordMOct{October}\fi
\ifx\putwordMNov\undefined \gdef\putwordMNov{November}\fi
\ifx\putwordMDec\undefined \gdef\putwordMDec{December}\fi
%
\ifx\putwordDefmac\undefined    \gdef\putwordDefmac{Macro}\fi
\ifx\putwordDefspec\undefined   \gdef\putwordDefspec{Special Form}\fi
\ifx\putwordDefvar\undefined    \gdef\putwordDefvar{Variable}\fi
\ifx\putwordDefopt\undefined    \gdef\putwordDefopt{User Option}\fi
\ifx\putwordDeffunc\undefined   \gdef\putwordDeffunc{Function}\fi

% Since the category of space is not known, we have to be careful.
\chardef\spacecat = 10
\def\spaceisspace{\catcode`\ =\spacecat}

% sometimes characters are active, so we need control sequences.
\chardef\ampChar   = `\&
\chardef\colonChar = `\:
\chardef\commaChar = `\,
\chardef\dashChar  = `\-
\chardef\dotChar   = `\.
\chardef\exclamChar= `\!
\chardef\hashChar  = `\#
\chardef\lquoteChar= `\`
\chardef\questChar = `\?
\chardef\rquoteChar= `\'
\chardef\semiChar  = `\;
\chardef\slashChar = `\/
\chardef\underChar = `\_

% Ignore a token.
%
\def\gobble#1{}

% The following is used inside several \edef's.
\def\makecsname#1{\expandafter\noexpand\csname#1\endcsname}
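% For instance (illustrative only): inside an \edef,
%   \makecsname{foo}
% first builds the control sequence \foo via \csname...\endcsname and
% then, thanks to \noexpand, leaves it as a single unexpandable token,
% so the name survives intact into the \edef's replacement text.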

% Hyphenation fixes.
\hyphenation{
  Flor-i-da Ghost-script Ghost-view Mac-OS Post-Script
  ap-pen-dix bit-map bit-maps
  data-base data-bases eshell fall-ing half-way long-est man-u-script
  man-u-scripts mini-buf-fer mini-buf-fers over-view par-a-digm
  par-a-digms rath-er rec-tan-gu-lar ro-bot-ics se-vere-ly set-up spa-ces
  spell-ing spell-ings
  stand-alone strong-est time-stamp time-stamps which-ever white-space
  wide-spread wrap-around
}

% Margin to add to right of even pages, to left of odd pages.
\newdimen\bindingoffset
\newdimen\normaloffset
\newdimen\pagewidth \newdimen\pageheight

% For a final copy, take out the rectangles
% that mark overfull boxes (in case you have decided
% that the text looks ok even though it passes the margin).
%
\def\finalout{\overfullrule=0pt }

% Sometimes it is convenient to have everything in the transcript file
% and nothing on the terminal.  We don't just call \tracingall here,
% since that produces some useless output on the terminal.  We also make
% some effort to order the tracing commands to reduce output in the log
% file; cf. trace.sty in LaTeX.
%
\def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}%
\def\loggingall{%
  \tracingstats2
  \tracingpages1
  \tracinglostchars2  % 2 gives us more in etex
  \tracingparagraphs1
  \tracingoutput1
  \tracingmacros2
  \tracingrestores1
  \showboxbreadth\maxdimen \showboxdepth\maxdimen
  \ifx\eTeXversion\thisisundefined\else % etex gives us more logging
    \tracingscantokens1
    \tracingifs1
    \tracinggroups1
    \tracingnesting2
    \tracingassigns1
  \fi
  \tracingcommands3  % 3 gives us more in etex
  \errorcontextlines16
}%

% @errormsg{MSG}.  Do the index-like expansions on MSG, but if things
% aren't perfect, it's not the end of the world, being an error message,
% after all.
% 
\def\errormsg{\begingroup \indexnofonts \doerrormsg}
\def\doerrormsg#1{\errmessage{#1}}

% add check for \lastpenalty to plain's definitions.  If the last thing
% we did was a \nobreak, we don't want to insert more space.
%
\def\smallbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\smallskipamount
  \removelastskip\penalty-50\smallskip\fi\fi}
\def\medbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\medskipamount
  \removelastskip\penalty-100\medskip\fi\fi}
\def\bigbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\bigskipamount
  \removelastskip\penalty-200\bigskip\fi\fi}

% Do @cropmarks to get crop marks.
%
\newif\ifcropmarks
\let\cropmarks = \cropmarkstrue
%
% Dimensions to add cropmarks at corners.
% Added by P. A. MacKay, 12 Nov. 1986
%
\newdimen\outerhsize \newdimen\outervsize % set by the paper size routines
\newdimen\cornerlong  \cornerlong=1pc
\newdimen\cornerthick \cornerthick=.3pt
\newdimen\topandbottommargin \topandbottommargin=.75in

% Output a mark which sets \thischapter, \thissection and \thiscolor.
% We dump everything together because we only have one kind of mark.
% This works because we only use \botmark / \topmark, not \firstmark.
%
% A mark contains a subexpression of the \ifcase ... \fi construct.
% \get*marks macros below extract the needed part using \ifcase.
%
% Another complication is to let the user choose whether \thischapter
% (\thissection) refers to the chapter (section) in effect at the top
% of a page, or that at the bottom of a page.  The solution is
% described on page 260 of The TeXbook.  It involves outputting two
% marks for the sectioning macros, one before the section break, and
% one after.  I won't pretend I can describe this better than DEK...
\def\domark{%
  \toks0=\expandafter{\lastchapterdefs}%
  \toks2=\expandafter{\lastsectiondefs}%
  \toks4=\expandafter{\prevchapterdefs}%
  \toks6=\expandafter{\prevsectiondefs}%
  \toks8=\expandafter{\lastcolordefs}%
  \mark{%
                   \the\toks0 \the\toks2
      \noexpand\or \the\toks4 \the\toks6
    \noexpand\else \the\toks8
  }%
}
% \topmark doesn't work for the very first chapter (after the title
% page or the contents), so we use \firstmark there -- this gets us
% the mark with the chapter defs, unless the user sneaks in, e.g.,
% @setcolor (or @url, or @link, etc.) between @contents and the very
% first @chapter.
\def\gettopheadingmarks{%
  \ifcase0\topmark\fi
  \ifx\thischapter\empty \ifcase0\firstmark\fi \fi
}
\def\getbottomheadingmarks{\ifcase1\botmark\fi}
\def\getcolormarks{\ifcase2\topmark\fi}

% Avoid "undefined control sequence" errors.
\def\lastchapterdefs{}
\def\lastsectiondefs{}
\def\prevchapterdefs{}
\def\prevsectiondefs{}
\def\lastcolordefs{}

% Main output routine.
\chardef\PAGE = 255
\output = {\onepageout{\pagecontents\PAGE}}

\newbox\headlinebox
\newbox\footlinebox

% \onepageout takes a vbox as an argument.  Note that \pagecontents
% does insertions, but you have to call it yourself.
\def\onepageout#1{%
  \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi
  %
  \ifodd\pageno  \advance\hoffset by \bindingoffset
  \else \advance\hoffset by -\bindingoffset\fi
  %
  % Do this outside of the \shipout so @code etc. will be expanded in
  % the headline as they should be, not taken literally (outputting ''code).
  \ifodd\pageno \getoddheadingmarks \else \getevenheadingmarks \fi
  \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}%
  \ifodd\pageno \getoddfootingmarks \else \getevenfootingmarks \fi
  \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}%
  %
  {%
    % Have to do this stuff outside the \shipout because we want it to
    % take effect in \write's, yet the group defined by the \vbox ends
    % before the \shipout runs.
    %
    \indexdummies         % don't expand commands in the output.
    \normalturnoffactive  % \ in index entries must not stay \, e.g., if
               % the page break happens to be in the middle of an example.
               % We don't want .vr (or whatever) entries like this:
               % \entry{{\tt \indexbackslash }acronym}{32}{\code {\acronym}}
               % "\acronym" won't work when it's read back in;
               % it needs to be
               % {\code {{\tt \backslashcurfont }acronym}
    \shipout\vbox{%
      % Do this early so pdf references go to the beginning of the page.
      \ifpdfmakepagedest \pdfdest name{\the\pageno} xyz\fi
      %
      \ifcropmarks \vbox to \outervsize\bgroup
        \hsize = \outerhsize
        \vskip-\topandbottommargin
        \vtop to0pt{%
          \line{\ewtop\hfil\ewtop}%
          \nointerlineskip
          \line{%
            \vbox{\moveleft\cornerthick\nstop}%
            \hfill
            \vbox{\moveright\cornerthick\nstop}%
          }%
          \vss}%
        \vskip\topandbottommargin
        \line\bgroup
          \hfil % center the page within the outer (page) hsize.
          \ifodd\pageno\hskip\bindingoffset\fi
          \vbox\bgroup
      \fi
      %
      \unvbox\headlinebox
      \pagebody{#1}%
      \ifdim\ht\footlinebox > 0pt
        % Only leave this space if the footline is nonempty.
        % (We lessened \vsize for it in \oddfootingyyy.)
        % The \baselineskip=24pt in plain's \makefootline has no effect.
        \vskip 24pt
        \unvbox\footlinebox
      \fi
      %
      \ifcropmarks
          \egroup % end of \vbox\bgroup
        \hfil\egroup % end of (centering) \line\bgroup
        \vskip\topandbottommargin plus1fill minus1fill
        \boxmaxdepth = \cornerthick
        \vbox to0pt{\vss
          \line{%
            \vbox{\moveleft\cornerthick\nsbot}%
            \hfill
            \vbox{\moveright\cornerthick\nsbot}%
          }%
          \nointerlineskip
          \line{\ewbot\hfil\ewbot}%
        }%
      \egroup % \vbox from first cropmarks clause
      \fi
    }% end of \shipout\vbox
  }% end of group with \indexdummies
  \advancepageno
  \ifnum\outputpenalty>-20000 \else\dosupereject\fi
}

\newinsert\margin \dimen\margin=\maxdimen

\def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}}
{\catcode`\@ =11
\gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi
% marginal hacks, juha@viisa.uucp (Juha Takala)
\ifvoid\margin\else % marginal info is present
  \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi
\dimen@=\dp#1\relax \unvbox#1\relax
\ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi
\ifr@ggedbottom \kern-\dimen@ \vfil \fi}
}

% Here are the rules for the cropmarks.  Note that they are
% offset so that the space between them is truly \outerhsize or \outervsize
% (P. A. MacKay, 12 November, 1986)
%
\def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong}
\def\nstop{\vbox
  {\hrule height\cornerthick depth\cornerlong width\cornerthick}}
\def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong}
\def\nsbot{\vbox
  {\hrule height\cornerlong depth\cornerthick width\cornerthick}}

% Parse an argument, then pass it to #1.  The argument is the rest of
% the input line (except we remove a trailing comment).  #1 should be a
% macro which expects an ordinary undelimited TeX argument.
%
\def\parsearg{\parseargusing{}}
\def\parseargusing#1#2{%
  \def\argtorun{#2}%
  \begingroup
    \obeylines
    \spaceisspace
    #1%
    \parseargline\empty% Insert the \empty token, see \finishparsearg below.
}

{\obeylines %
  \gdef\parseargline#1^^M{%
    \endgroup % End of the group started in \parsearg.
    \argremovecomment #1\comment\ArgTerm%
  }%
}

% First remove any @comment, then any @c comment.
\def\argremovecomment#1\comment#2\ArgTerm{\argremovec #1\c\ArgTerm}
\def\argremovec#1\c#2\ArgTerm{\argcheckspaces#1\^^M\ArgTerm}

% Each occurrence of `\^^M' or `<space>\^^M' is replaced by a single space.
%
% \argremovec might leave us with trailing space, e.g.,
%    @end itemize  @c foo
% This space token undergoes the same procedure and is eventually removed
% by \finishparsearg.
%
\def\argcheckspaces#1\^^M{\argcheckspacesX#1\^^M \^^M}
\def\argcheckspacesX#1 \^^M{\argcheckspacesY#1\^^M}
\def\argcheckspacesY#1\^^M#2\^^M#3\ArgTerm{%
  \def\temp{#3}%
  \ifx\temp\empty
    % Do not use \next, perhaps the caller of \parsearg uses it; reuse \temp:
    \let\temp\finishparsearg
  \else
    \let\temp\argcheckspaces
  \fi
  % Put the space token in:
  \temp#1 #3\ArgTerm
}

% If a _delimited_ argument is enclosed in braces, they get stripped; so
% to get _exactly_ the rest of the line, we have to prevent that situation.
% We prepend an \empty token at the very beginning and expand it now,
% just before passing control to \argtorun.
% (Similarly, we have to think about #3 of \argcheckspacesY above: it is
% either the null string, or it ends with \^^M---thus there is no danger
% that a pair of braces would be stripped.)
%
% But first, we have to remove the trailing space token.
%
\def\finishparsearg#1 \ArgTerm{\expandafter\argtorun\expandafter{#1}}
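
% For example (an illustrative trace, not part of the macros themselves):
% given the input line `@end itemize  @c foo', \parseargline grabs
% `\empty itemize  \c foo'; \argremovecomment and \argremovec discard
% everything from the comment onward; \argcheckspaces collapses the
% trailing `<space>\^^M' into a single space token; and \finishparsearg
% removes that space, expands the leading \empty away, and finally calls
% \argtorun{itemize}.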

% \parseargdef\foo{...}
%	is roughly equivalent to
% \def\foo{\parsearg\Xfoo}
% \def\Xfoo#1{...}
%
% Actually, I use \csname\string\foo\endcsname, i.e. \\foo, as it is my
% favourite TeX trick.  --kasal, 16nov03

\def\parseargdef#1{%
  \expandafter \doparseargdef \csname\string#1\endcsname #1%
}
\def\doparseargdef#1#2{%
  \def#2{\parsearg#1}%
  \def#1##1%
}
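
% So, for instance, `\parseargdef\foo{...}' is (sketchily) equivalent to:
%   \def\foo{\parsearg\\foo}
%   \def\\foo#1{...}
% where \\foo stands for the control sequence \csname\string\foo\endcsname.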

% Several utility definitions with active space:
{
  \obeyspaces
  \gdef\obeyedspace{ }

  % Make each space character in the input produce a normal interword
  % space in the output.  Don't allow a line break at this space, as this
  % is used only in environments like @example, where each line of input
  % should produce a line of output anyway.
  %
  \gdef\sepspaces{\obeyspaces\let =\tie}

  % If an index command is used in an @example environment, any spaces
  % therein should become regular spaces in the raw index file, not the
  % expansion of \tie (\leavevmode \penalty \@M \ ).
  \gdef\unsepspaces{\let =\space}
}


\def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next}

% Define the framework for environments in texinfo.tex.  It's used like this:
%
%   \envdef\foo{...}
%   \def\Efoo{...}
%
% It's the responsibility of \envdef to insert \begingroup before the
% actual body; @end closes the group after calling \Efoo.  \envdef also
% defines \thisenv, so the current environment is known; @end checks
% whether the environment name matches.  The \checkenv macro can also be
% used to check whether the current environment is the one expected.
%
% Non-false conditionals (@iftex, @ifset) don't fit into this, so they
% are not treated as environments; they don't open a group.  (The
% implementation of @end takes care not to call \endgroup in this
% special case.)


% At run-time, environments start with this:
\def\startenvironment#1{\begingroup\def\thisenv{#1}}
% initialize
\let\thisenv\empty

% ... but they get defined via ``\envdef\foo{...}'':
\long\def\envdef#1#2{\def#1{\startenvironment#1#2}}
\def\envparseargdef#1#2{\parseargdef#1{\startenvironment#1#2}}

% Check whether we're in the right environment:
\def\checkenv#1{%
  \def\temp{#1}%
  \ifx\thisenv\temp
  \else
    \badenverr
  \fi
}

% Environment mismatch, #1 expected:
\def\badenverr{%
  \errhelp = \EMsimple
  \errmessage{This command can appear only \inenvironment\temp,
    not \inenvironment\thisenv}%
}
\def\inenvironment#1{%
  \ifx#1\empty
    outside of any environment%
  \else
    in environment \expandafter\string#1%
  \fi
}

% @end foo executes the definition of \Efoo.
% But first, it executes a specialized version of \checkenv
%
\parseargdef\end{%
  \if 1\csname iscond.#1\endcsname
  \else
    % The general wording of \badenverr may not be ideal.
    \expandafter\checkenv\csname#1\endcsname
    \csname E#1\endcsname
    \endgroup
  \fi
}

\newhelp\EMsimple{Press RETURN to continue.}


% Be sure we're in horizontal mode when doing a tie, since we make space
% equivalent to this in @example-like environments. Otherwise, a space
% at the beginning of a line will start with \penalty -- and
% since \penalty is valid in vertical mode, we'd end up putting the
% penalty on the vertical list instead of in the new paragraph.
{\catcode`@ = 11
 % Avoid using \@M directly, because that causes trouble
 % if the definition is written into an index file.
 \global\let\tiepenalty = \@M
 \gdef\tie{\leavevmode\penalty\tiepenalty\ }
}

% @: forces normal size whitespace following.
\def\:{\spacefactor=1000 }

% @* forces a line break.
\def\*{\unskip\hfil\break\hbox{}\ignorespaces}

% @/ allows a line break.
\let\/=\allowbreak

% @. is an end-of-sentence period.
\def\.{.\spacefactor=\endofsentencespacefactor\space}

% @! is an end-of-sentence bang.
\def\!{!\spacefactor=\endofsentencespacefactor\space}

% @? is an end-of-sentence query.
\def\?{?\spacefactor=\endofsentencespacefactor\space}

% @frenchspacing on|off  says whether to put extra space after punctuation.
%
\def\onword{on}
\def\offword{off}
%
\parseargdef\frenchspacing{%
  \def\temp{#1}%
  \ifx\temp\onword \plainfrenchspacing
  \else\ifx\temp\offword \plainnonfrenchspacing
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @frenchspacing option `\temp', must be on|off}%
  \fi\fi
}

% @w prevents a word break.  Without the \leavevmode, @w at the
% beginning of a paragraph, when TeX is still in vertical mode, would
% produce a whole line of output instead of starting the paragraph.
\def\w#1{\leavevmode\hbox{#1}}

% @group ... @end group forces ... to be all on one page, by enclosing
% it in a TeX vbox.  We use \vtop instead of \vbox to construct the box
% to keep its height that of a normal line.  According to the rules for
% \topskip (p.114 of the TeXbook), the glue inserted is
% max (\topskip - \ht (first item), 0).  If that height is large,
% therefore, no glue is inserted, and the space between the headline and
% the text is small, which looks bad.
%
% Another complication is that the group might be very large.  This can
% cause the glue on the previous page to be unduly stretched, because it
% does not have much material.  In this case, it's better to add an
% explicit \vfill so that the extra space is at the bottom.  The
% threshold for doing this is if the group is more than \vfilllimit
% percent of a page (\vfilllimit can be changed inside of @tex).
%
\newbox\groupbox
\def\vfilllimit{0.7}
%
\envdef\group{%
  \ifnum\catcode`\^^M=\active \else
    \errhelp = \groupinvalidhelp
    \errmessage{@group invalid in context where filling is enabled}%
  \fi
  \startsavinginserts
  %
  \setbox\groupbox = \vtop\bgroup
    % Do @comment since we are called inside an environment such as
    % @example, where each end-of-line in the input causes an
    % end-of-line in the output.  We don't want the end-of-line after
    % the `@group' to put extra space in the output.  Since @group
    % should appear on a line by itself (according to the Texinfo
    % manual), we don't worry about eating any user text.
    \comment
}
%
% The \vtop produces a box with normal height and large depth; thus, TeX puts
% \baselineskip glue before it, and (when the next line of text is done)
% \lineskip glue after it.  Thus, space below is not quite equal to space
% above.  But it's pretty close.
\def\Egroup{%
    % To get correct interline space between the last line of the group
    % and the first line afterwards, we have to propagate \prevdepth.
    \endgraf % Not \par, as it may have been set to \lisppar.
    \global\dimen1 = \prevdepth
  \egroup           % End the \vtop.
  % \dimen0 is the vertical size of the group's box.
  \dimen0 = \ht\groupbox  \advance\dimen0 by \dp\groupbox
  % \dimen2 is how much space is left on the page (more or less).
  \dimen2 = \pageheight   \advance\dimen2 by -\pagetotal
  % if the group doesn't fit on the current page, and it's a big big
  % group, force a page break.
  \ifdim \dimen0 > \dimen2
    \ifdim \pagetotal < \vfilllimit\pageheight
      \page
    \fi
  \fi
  \box\groupbox
  \prevdepth = \dimen1
  \checkinserts
}
%
% TeX puts in an \escapechar (i.e., `@') at the beginning of the help
% message, so this ends up printing `@group can only ...'.
%
\newhelp\groupinvalidhelp{%
group can only be used in environments such as @example,^^J%
where each line of input produces a line of output.}

% @need space-in-mils
% forces a page break if there is not space-in-mils remaining.

\newdimen\mil  \mil=0.001in

\parseargdef\need{%
  % Ensure vertical mode, so we don't make a big box in the middle of a
  % paragraph.
  \par
  %
  % If the @need value is less than one line space, it's useless.
  \dimen0 = #1\mil
  \dimen2 = \ht\strutbox
  \advance\dimen2 by \dp\strutbox
  \ifdim\dimen0 > \dimen2
    %
    % Do a \strut just to make the height of this box be normal, so the
    % normal leading is inserted relative to the preceding line.
    % And a page break here is fine.
    \vtop to #1\mil{\strut\vfil}%
    %
    % TeX does not even consider page breaks if a penalty added to the
    % main vertical list is 10000 or more.  But in order to see if the
    % empty box we just added fits on the page, we must make it consider
    % page breaks.  On the other hand, we don't want to actually break the
    % page after the empty box.  So we use a penalty of 9999.
    %
    % There is an extremely small chance that TeX will actually break the
    % page at this \penalty, if there are no other feasible breakpoints in
    % sight.  (If the user is using lots of big @group commands, which
    % almost-but-not-quite fill up a page, TeX will have a hard time doing
    % good page breaking, for example.)  However, I could not construct an
    % example where a page broke at this \penalty; if it happens in a real
    % document, then we can reconsider our strategy.
    \penalty9999
    %
    % Back up by the size of the box, whether we did a page break or not.
    \kern -#1\mil
    %
    % Do not allow a page break right after this kern.
    \nobreak
  \fi
}
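
% For example, `@need 800' asks for 800 mils (0.8in) of vertical space:
% if less than that remains on the current page, the empty \vtop above
% does not fit, and the \penalty9999 lets TeX break the page there.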

% @br   forces paragraph break (and is undocumented).

\let\br = \par

% @page forces the start of a new page.
%
\def\page{\par\vfill\supereject}

% @exdent text....
% outputs text on separate line in roman font, starting at standard page margin

% This records the amount of indent in the innermost environment.
% That's how much \exdent should take out.
\newskip\exdentamount

% This defn is used inside fill environments such as @defun.
\parseargdef\exdent{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break}

% This defn is used inside nofill environments such as @example.
\parseargdef\nofillexdent{{\advance \leftskip by -\exdentamount
  \leftline{\hskip\leftskip{\rm#1}}}}

% @inmargin{WHICH}{TEXT} puts TEXT in the WHICH margin next to the current
% paragraph.  For more general purposes, use the \margin insertion
% class.  WHICH is `l' or `r'.  Not documented; written for the gawk manual.
%
\newskip\inmarginspacing \inmarginspacing=1cm
\def\strutdepth{\dp\strutbox}
%
\def\doinmargin#1#2{\strut\vadjust{%
  \nobreak
  \kern-\strutdepth
  \vtop to \strutdepth{%
    \baselineskip=\strutdepth
    \vss
    % if you have multiple lines of stuff to put here, you'll need to
    % make the vbox yourself of the appropriate size.
    \ifx#1l%
      \llap{\ignorespaces #2\hskip\inmarginspacing}%
    \else
      \rlap{\hskip\hsize \hskip\inmarginspacing \ignorespaces #2}%
    \fi
    \null
  }%
}}
\def\inleftmargin{\doinmargin l}
\def\inrightmargin{\doinmargin r}
%
% @inmargin{TEXT [, RIGHT-TEXT]}
% (if RIGHT-TEXT is given, use TEXT for left page, RIGHT-TEXT for right;
% else use TEXT for both).
%
\def\inmargin#1{\parseinmargin #1,,\finish}
\def\parseinmargin#1,#2,#3\finish{% not perfect, but better than nothing.
  \setbox0 = \hbox{\ignorespaces #2}%
  \ifdim\wd0 > 0pt
    \def\lefttext{#1}%  have both texts
    \def\righttext{#2}%
  \else
    \def\lefttext{#1}%  have only one text
    \def\righttext{#1}%
  \fi
  %
  \ifodd\pageno
    \def\temp{\inrightmargin\righttext}% odd page -> outside is right margin
  \else
    \def\temp{\inleftmargin\lefttext}%
  \fi
  \temp
}

% @| inserts a changebar to the left of the current line.  It should
% surround any changed text.  This approach does *not* work if the
% change spans more than two lines of output.  To handle that, we would
% have to adopt a much more difficult approach (putting marks into the main
% vertical list for the beginning and end of each change).  This command
% is not documented, not supported, and doesn't work.
%
\def\|{%
  % \vadjust can only be used in horizontal mode.
  \leavevmode
  %
  % Append this vertical mode material after the current line in the output.
  \vadjust{%
    % We want to insert a rule with the height and depth of the current
    % leading; that is exactly what \strutbox is supposed to record.
    \vskip-\baselineskip
    %
    % \vadjust-items are inserted at the left edge of the type.  So
    % the \llap here moves out into the left-hand margin.
    \llap{%
      %
      % For a thicker or thinner bar, change the `1pt'.
      \vrule height\baselineskip width1pt
      %
      % This is the space between the bar and the text.
      \hskip 12pt
    }%
  }%
}

% @include FILE -- \input text of FILE.
%
\def\include{\parseargusing\filenamecatcodes\includezzz}
\def\includezzz#1{%
  \pushthisfilestack
  \def\thisfile{#1}%
  {%
    \makevalueexpandable  % we want to expand any @value in FILE.
    \turnoffactive        % and allow special characters in the expansion
    \indexnofonts         % Allow `@@' and other weird things in file names.
    \wlog{texinfo.tex: doing @include of #1^^J}%
    \edef\temp{\noexpand\input #1 }%
    %
    % This trickery is to read FILE outside of a group, in case it makes
    % definitions, etc.
    \expandafter
  }\temp
  \popthisfilestack
}
\def\filenamecatcodes{%
  \catcode`\\=\other
  \catcode`~=\other
  \catcode`^=\other
  \catcode`_=\other
  \catcode`|=\other
  \catcode`<=\other
  \catcode`>=\other
  \catcode`+=\other
  \catcode`-=\other
  \catcode`\`=\other
  \catcode`\'=\other
}

\def\pushthisfilestack{%
  \expandafter\pushthisfilestackX\popthisfilestack\StackTerm
}
\def\pushthisfilestackX{%
  \expandafter\pushthisfilestackY\thisfile\StackTerm
}
\def\pushthisfilestackY #1\StackTerm #2\StackTerm {%
  \gdef\popthisfilestack{\gdef\thisfile{#1}\gdef\popthisfilestack{#2}}%
}

\def\popthisfilestack{\errthisfilestackempty}
\def\errthisfilestackempty{\errmessage{Internal error:
  the stack of filenames is empty.}}
%
\def\thisfile{}

% @center line
% outputs that line, centered.
%
\parseargdef\center{%
  \ifhmode
    \let\centersub\centerH
  \else
    \let\centersub\centerV
  \fi
  \centersub{\hfil \ignorespaces#1\unskip \hfil}%
  \let\centersub\relax % don't let the definition persist, just in case
}
\def\centerH#1{{%
  \hfil\break
  \advance\hsize by -\leftskip
  \advance\hsize by -\rightskip
  \line{#1}%
  \break
}}
%
\newcount\centerpenalty
\def\centerV#1{%
  % The idea here is the same as in \startdefun, \cartouche, etc.: if
  % @center is the first thing after a section heading, we need to wipe
  % out the negative parskip inserted by \sectionheading, but still
  % prevent a page break here.
  \centerpenalty = \lastpenalty
  \ifnum\centerpenalty>10000 \vskip\parskip \fi
  \ifnum\centerpenalty>9999 \penalty\centerpenalty \fi
  \line{\kern\leftskip #1\kern\rightskip}%
}

% @sp n   outputs n lines of vertical space
%
\parseargdef\sp{\vskip #1\baselineskip}

% @comment ...line which is ignored...
% @c is the same as @comment
% @ignore ... @end ignore  is another way to write a comment
%
\def\comment{\begingroup \catcode`\^^M=\other%
\catcode`\@=\other \catcode`\{=\other \catcode`\}=\other%
\commentxxx}
{\catcode`\^^M=\other \gdef\commentxxx#1^^M{\endgroup}}
%
\let\c=\comment

% @paragraphindent NCHARS
% We'll use ems for NCHARS, close enough.
% NCHARS can also be the word `asis' or `none'.
% We cannot feasibly implement @paragraphindent asis, though.
%
\def\asisword{asis} % no translation, these are keywords
\def\noneword{none}
%
\parseargdef\paragraphindent{%
  \def\temp{#1}%
  \ifx\temp\asisword
  \else
    \ifx\temp\noneword
      \defaultparindent = 0pt
    \else
      \defaultparindent = #1em
    \fi
  \fi
  \parindent = \defaultparindent
}

% @exampleindent NCHARS
% We'll use ems for NCHARS like @paragraphindent.
% It seems @exampleindent asis isn't necessary, but
% I preserve it to make it similar to @paragraphindent.
\parseargdef\exampleindent{%
  \def\temp{#1}%
  \ifx\temp\asisword
  \else
    \ifx\temp\noneword
      \lispnarrowing = 0pt
    \else
      \lispnarrowing = #1em
    \fi
  \fi
}

% @firstparagraphindent WORD
% If WORD is `none', then suppress indentation of the first paragraph
% after a section heading.  If WORD is `insert', then do indent at such
% paragraphs.
%
% The paragraph indentation is suppressed or not by calling
% \suppressfirstparagraphindent, which the sectioning commands do.
% We switch the definition of this back and forth according to WORD.
% By default, we suppress indentation.
%
\def\suppressfirstparagraphindent{\dosuppressfirstparagraphindent}
\def\insertword{insert}
%
\parseargdef\firstparagraphindent{%
  \def\temp{#1}%
  \ifx\temp\noneword
    \let\suppressfirstparagraphindent = \dosuppressfirstparagraphindent
  \else\ifx\temp\insertword
    \let\suppressfirstparagraphindent = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @firstparagraphindent option `\temp'}%
  \fi\fi
}

% Here is how we actually suppress indentation.  Redefine \everypar to
% \kern backwards by \parindent, and then reset itself to empty.
%
% We also make \indent itself not actually do anything until the next
% paragraph.
%
\gdef\dosuppressfirstparagraphindent{%
  \gdef\indent{%
    \restorefirstparagraphindent
    \indent
  }%
  \gdef\noindent{%
    \restorefirstparagraphindent
    \noindent
  }%
  \global\everypar = {%
    \kern -\parindent
    \restorefirstparagraphindent
  }%
}

\gdef\restorefirstparagraphindent{%
  \global \let \indent = \ptexindent
  \global \let \noindent = \ptexnoindent
  \global \everypar = {}%
}


% @refill is a no-op.
\let\refill=\relax

% If working on a large document in chapters, it is convenient to
% be able to disable indexing, cross-referencing, and contents, for test runs.
% This is done with @novalidate (before @setfilename).
%
\newif\iflinks \linkstrue % by default we want the aux files.
\let\novalidate = \linksfalse

% @setfilename is done at the beginning of every texinfo file.
% So open here the files we need to have open while reading the input.
% This makes it possible to make a .fmt file for texinfo.
\def\setfilename{%
   \fixbackslash  % Turn off hack to swallow `\input texinfo'.
   \iflinks
     \tryauxfile
     % Open the new aux file.  TeX will close it automatically at exit.
     \immediate\openout\auxfile=\jobname.aux
   \fi % \openindices needs to do some work in any case.
   \openindices
   \let\setfilename=\comment % Ignore extra @setfilename cmds.
   %
   % If texinfo.cnf is present on the system, read it.
   % Useful for site-wide @afourpaper, etc.
   \openin 1 texinfo.cnf
   \ifeof 1 \else \input texinfo.cnf \fi
   \closein 1
   %
   \comment % Ignore the actual filename.
}

% Called from \setfilename.
%
\def\openindices{%
  \newindex{cp}%
  \newcodeindex{fn}%
  \newcodeindex{vr}%
  \newcodeindex{tp}%
  \newcodeindex{ky}%
  \newcodeindex{pg}%
}

% @bye.
\outer\def\bye{\pagealignmacro\tracingstats=1\ptexend}


\message{pdf,}
% adobe `portable' document format
\newcount\tempnum
\newcount\lnkcount
\newtoks\filename
\newcount\filenamelength
\newcount\pgn
\newtoks\toksA
\newtoks\toksB
\newtoks\toksC
\newtoks\toksD
\newbox\boxA
\newcount\countA
\newif\ifpdf
\newif\ifpdfmakepagedest

% when pdftex is run in dvi mode, \pdfoutput is defined (so \pdfoutput=1
% can be set).  So we test for \relax and 0 as well as being undefined.
\ifx\pdfoutput\thisisundefined
\else
  \ifx\pdfoutput\relax
  \else
    \ifcase\pdfoutput
    \else
      \pdftrue
    \fi
  \fi
\fi

% PDF uses PostScript string constants for the names of xref targets,
% for display in the outlines, and in other places.  Thus, we have to
% double any backslashes.  Otherwise, a name like "\node" will be
% interpreted as a newline (\n), followed by o, d, e.  Not good.
% 
% See http://www.ntg.nl/pipermail/ntg-pdftex/2004-July/000654.html and
% related messages.  The final outcome is that it is up to the TeX user
% to double the backslashes and otherwise make the string valid, so
% that's what we do.  pdftex 1.30.0 (ca.2005) introduced a primitive to
% do this reliably, so we use it.

% #1 is a control sequence in which to do the replacements,
% which we \xdef.
\def\txiescapepdf#1{%
  \ifx\pdfescapestring\thisisundefined
    % No primitive available; should we give a warning or log?
    % Many times it won't matter.
  \else
    % The expandable \pdfescapestring primitive escapes parentheses,
    % backslashes, and other special chars.
    \xdef#1{\pdfescapestring{#1}}%
  \fi
}

\newhelp\nopdfimagehelp{Texinfo supports .png, .jpg, .jpeg, and .pdf images
with PDF output, and none of those formats could be found.  (.eps cannot
be supported due to the design of the PDF format; use regular TeX (DVI
output) for that.)}

\ifpdf
  %
  % Color manipulation macros based on pdfcolor.tex,
  % except using rgb instead of cmyk; the latter is said to render as a
  % very dark gray on-screen and a very dark halftone in print, instead
  % of actual black.
  \def\rgbDarkRed{0.50 0.09 0.12}
  \def\rgbBlack{0 0 0}
  %
  % k sets the color for filling (usual text, etc.);
  % K sets the color for stroking (thin rules, e.g., normal _'s).
  \def\pdfsetcolor#1{\pdfliteral{#1 rg  #1 RG}}
  %
  % Set color, and create a mark which defines \thiscolor accordingly,
  % so that \makeheadline knows which color to restore.
  \def\setcolor#1{%
    \xdef\lastcolordefs{\gdef\noexpand\thiscolor{#1}}%
    \domark
    \pdfsetcolor{#1}%
  }
  %
  \def\maincolor{\rgbBlack}
  \pdfsetcolor{\maincolor}
  \edef\thiscolor{\maincolor}
  \def\lastcolordefs{}
  %
  \def\makefootline{%
    \baselineskip24pt
    \line{\pdfsetcolor{\maincolor}\the\footline}%
  }
  %
  \def\makeheadline{%
    \vbox to 0pt{%
      \vskip-22.5pt
      \line{%
        \vbox to8.5pt{}%
        % Extract \thiscolor definition from the marks.
        \getcolormarks
        % Typeset the headline with \maincolor, then restore the color.
        \pdfsetcolor{\maincolor}\the\headline\pdfsetcolor{\thiscolor}%
      }%
      \vss
    }%
    \nointerlineskip
  }
  %
  %
  \pdfcatalog{/PageMode /UseOutlines}
  %
  % #1 is image name, #2 width (might be empty/whitespace), #3 height (ditto).
  \def\dopdfimage#1#2#3{%
    \def\pdfimagewidth{#2}\setbox0 = \hbox{\ignorespaces #2}%
    \def\pdfimageheight{#3}\setbox2 = \hbox{\ignorespaces #3}%
    %
    % pdftex (and the PDF format) support .pdf, .png, .jpg (among
    % others).  Let's try them in that order: PDF first, since if someone
    % has a scalable image, it's presumably better to use that than a
    % bitmap.
    \let\pdfimgext=\empty
    \begingroup
      \openin 1 #1.pdf \ifeof 1
        \openin 1 #1.PDF \ifeof 1
          \openin 1 #1.png \ifeof 1
            \openin 1 #1.jpg \ifeof 1
              \openin 1 #1.jpeg \ifeof 1
                \openin 1 #1.JPG \ifeof 1
                  \errhelp = \nopdfimagehelp
                  \errmessage{Could not find image file #1 for pdf}%
                \else \gdef\pdfimgext{JPG}%
                \fi
              \else \gdef\pdfimgext{jpeg}%
              \fi
            \else \gdef\pdfimgext{jpg}%
            \fi
          \else \gdef\pdfimgext{png}%
          \fi
        \else \gdef\pdfimgext{PDF}%
        \fi
      \else \gdef\pdfimgext{pdf}%
      \fi
      \closein 1
    \endgroup
    %
    % without \immediate, ancient pdftex seg faults when the same image is
    % included twice.  (Version 3.14159-pre-1.0-unofficial-20010704.)
    \ifnum\pdftexversion < 14
      \immediate\pdfimage
    \else
      \immediate\pdfximage
    \fi
      \ifdim \wd0 >0pt width \pdfimagewidth \fi
      \ifdim \wd2 >0pt height \pdfimageheight \fi
      \ifnum\pdftexversion<13
         #1.\pdfimgext
       \else
         {#1.\pdfimgext}%
       \fi
    \ifnum\pdftexversion < 14 \else
      \pdfrefximage \pdflastximage
    \fi}
  %
  \def\pdfmkdest#1{{%
    % We have to set dummies so commands such as @code, and characters
    % such as \, aren't expanded when present in a section title.
    \indexnofonts
    \turnoffactive
    \makevalueexpandable
    \def\pdfdestname{#1}%
    \txiescapepdf\pdfdestname
    \safewhatsit{\pdfdest name{\pdfdestname} xyz}%
  }}
  %
  % used to mark target names; must be expandable.
  \def\pdfmkpgn#1{#1}
  %
  % by default, use a color that is dark enough to print on paper as
  % nearly black, but still distinguishable for online viewing.
  \def\urlcolor{\rgbDarkRed}
  \def\linkcolor{\rgbDarkRed}
  \def\endlink{\setcolor{\maincolor}\pdfendlink}
  %
  % Adding outlines to PDF; macros for calculating structure of outlines
  % come from Petr Olsak
  \def\expnumber#1{\expandafter\ifx\csname#1\endcsname\relax 0%
    \else \csname#1\endcsname \fi}
  \def\advancenumber#1{\tempnum=\expnumber{#1}\relax
    \advance\tempnum by 1
    \expandafter\xdef\csname#1\endcsname{\the\tempnum}}
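
  % For example (illustrative): \expnumber{chap3} expands to 0 until
  % \advancenumber{chap3} is first called; after two calls it expands to 2.
  % The first pass over the .toc file below uses this to count the
  % subentries under each chapter, section, and subsection.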
  %
  % #1 is the section text, which is what will be displayed in the
  % outline by the pdf viewer.  #2 is the pdf expression for the number
  % of subentries (or empty, for subsubsections).  #3 is the node text,
  % which might be empty if this toc entry had no corresponding node.
  % #4 is the page number
  %
  \def\dopdfoutline#1#2#3#4{%
    % Generate a link to the node text if that exists; else, use the
    % page number.  We could generate a destination for the section
    % text in the case where a section has no node, but it doesn't
    % seem worth the trouble, since most documents are normally structured.
    \edef\pdfoutlinedest{#3}%
    \ifx\pdfoutlinedest\empty
      \def\pdfoutlinedest{#4}%
    \else
      \txiescapepdf\pdfoutlinedest
    \fi
    %
    % Also escape PDF chars in the display string.
    \edef\pdfoutlinetext{#1}%
    \txiescapepdf\pdfoutlinetext
    %
    \pdfoutline goto name{\pdfmkpgn{\pdfoutlinedest}}#2{\pdfoutlinetext}%
  }
  %
  \def\pdfmakeoutlines{%
    \begingroup
      % Read toc silently, to get counts of subentries for \pdfoutline.
      \def\partentry##1##2##3##4{}% ignore parts in the outlines
      \def\numchapentry##1##2##3##4{%
	\def\thischapnum{##2}%
	\def\thissecnum{0}%
	\def\thissubsecnum{0}%
      }%
      \def\numsecentry##1##2##3##4{%
	\advancenumber{chap\thischapnum}%
	\def\thissecnum{##2}%
	\def\thissubsecnum{0}%
      }%
      \def\numsubsecentry##1##2##3##4{%
	\advancenumber{sec\thissecnum}%
	\def\thissubsecnum{##2}%
      }%
      \def\numsubsubsecentry##1##2##3##4{%
	\advancenumber{subsec\thissubsecnum}%
      }%
      \def\thischapnum{0}%
      \def\thissecnum{0}%
      \def\thissubsecnum{0}%
      %
      % use \def rather than \let here because we redefine \chapentry et
      % al. a second time, below.
      \def\appentry{\numchapentry}%
      \def\appsecentry{\numsecentry}%
      \def\appsubsecentry{\numsubsecentry}%
      \def\appsubsubsecentry{\numsubsubsecentry}%
      \def\unnchapentry{\numchapentry}%
      \def\unnsecentry{\numsecentry}%
      \def\unnsubsecentry{\numsubsecentry}%
      \def\unnsubsubsecentry{\numsubsubsecentry}%
      \readdatafile{toc}%
      %
      % Read toc second time, this time actually producing the outlines.
      % The `-' means take the \expnumber as the absolute number of
      % subentries, which we calculated on our first read of the .toc above.
      %
      % We use the node names as the destinations.
      \def\numchapentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{chap##2}}{##3}{##4}}%
      \def\numsecentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{sec##2}}{##3}{##4}}%
      \def\numsubsecentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{subsec##2}}{##3}{##4}}%
      \def\numsubsubsecentry##1##2##3##4{% count is always zero
        \dopdfoutline{##1}{}{##3}{##4}}%
      %
      % PDF outlines are displayed using system fonts, instead of
      % document fonts.  Therefore we cannot use special characters,
      % since the encoding is unknown.  For example, the eogonek from
      % Latin 2 (0xea) gets translated to a | character.  Info from
      % Staszek Wawrykiewicz, 19 Jan 2004 04:09:24 +0100.
      %
      % To do this right, we would have to translate 8-bit characters to
      % their "best" equivalent, based on the @documentencoding.  Too
      % much work for too little return.  Just use the ASCII equivalents
      % we use for the index sort strings.
      % 
      \indexnofonts
      \setupdatafile
      % We can have normal brace characters in the PDF outlines, unlike
      % Texinfo index files.  So set that up.
      \def\{{\lbracecharliteral}%
      \def\}{\rbracecharliteral}%
      \catcode`\\=\active \otherbackslash
      \input \tocreadfilename
    \endgroup
  }
  {\catcode`[=1 \catcode`]=2
   \catcode`{=\other \catcode`}=\other
   \gdef\lbracecharliteral[{]%
   \gdef\rbracecharliteral[}]%
  ]
  %
  \def\skipspaces#1{\def\PP{#1}\def\D{|}%
    \ifx\PP\D\let\nextsp\relax
    \else\let\nextsp\skipspaces
      \addtokens{\filename}{\PP}%
      \advance\filenamelength by 1
    \fi
    \nextsp}
  \def\getfilename#1{%
    \filenamelength=0
    % If we don't expand the argument now, \skipspaces will get
    % snagged on things like "@value{foo}".
    \edef\temp{#1}%
    \expandafter\skipspaces\temp|\relax
  }
  \ifnum\pdftexversion < 14
    \let \startlink \pdfannotlink
  \else
    \let \startlink \pdfstartlink
  \fi
  % Make a live URL in PDF output.
  \def\pdfurl#1{%
    \begingroup
      % It seems we really need yet another set of dummies; we have not
      % tried to figure out what each command should do in the context
      % of @url.  For now, just make @/ a no-op; that's the only one
      % people have actually reported a problem with.
      %
      \normalturnoffactive
      \def\@{@}%
      \let\/=\empty
      \makevalueexpandable
      % do we want to go so far as to use \indexnofonts instead of just
      % special-casing \var here?
      \def\var##1{##1}%
      %
      \leavevmode\setcolor{\urlcolor}%
      \startlink attr{/Border [0 0 0]}%
        user{/Subtype /Link /A << /S /URI /URI (#1) >>}%
    \endgroup}
  \def\pdfgettoks#1.{\setbox\boxA=\hbox{\toksA={#1.}\toksB={}\maketoks}}
  \def\addtokens#1#2{\edef\addtoks{\noexpand#1={\the#1#2}}\addtoks}
  \def\adn#1{\addtokens{\toksC}{#1}\global\countA=1\let\next=\maketoks}
  \def\poptoks#1#2|ENDTOKS|{\let\first=#1\toksD={#1}\toksA={#2}}
  \def\maketoks{%
    \expandafter\poptoks\the\toksA|ENDTOKS|\relax
    \ifx\first0\adn0
    \else\ifx\first1\adn1 \else\ifx\first2\adn2 \else\ifx\first3\adn3
    \else\ifx\first4\adn4 \else\ifx\first5\adn5 \else\ifx\first6\adn6
    \else\ifx\first7\adn7 \else\ifx\first8\adn8 \else\ifx\first9\adn9
    \else
      \ifnum0=\countA\else\makelink\fi
      \ifx\first.\let\next=\done\else
        \let\next=\maketoks
        \addtokens{\toksB}{\the\toksD}
        \ifx\first,\addtokens{\toksB}{\space}\fi
      \fi
    \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
    \next}
  \def\makelink{\addtokens{\toksB}%
    {\noexpand\pdflink{\the\toksC}}\toksC={}\global\countA=0}
  \def\pdflink#1{%
    \startlink attr{/Border [0 0 0]} goto name{\pdfmkpgn{#1}}
    \setcolor{\linkcolor}#1\endlink}
  \def\done{\edef\st{\global\noexpand\toksA={\the\toksB}}\st}
\else
  % non-pdf mode
  \let\pdfmkdest = \gobble
  \let\pdfurl = \gobble
  \let\endlink = \relax
  \let\setcolor = \gobble
  \let\pdfsetcolor = \gobble
  \let\pdfmakeoutlines = \relax
\fi  % \ifx\pdfoutput


\message{fonts,}

% Change the current font style to #1, remembering it in \curfontstyle.
% For now, we do not accumulate font styles: @b{@i{foo}} prints foo in
% italics, not bold italics.
%
\def\setfontstyle#1{%
  \def\curfontstyle{#1}% not as a control sequence, because we are \edef'd.
  \csname ten#1\endcsname  % change the current font
}

% Select #1 fonts with the current style.
%
\def\selectfonts#1{\csname #1fonts\endcsname \csname\curfontstyle\endcsname}
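% For example, to typeset a run of text in the 9pt fonts while keeping
% the current style (a sketch; the size name must be one of the families
% defined below, such as text, small, smaller, reduced):
%   {\selectfonts{small} nine-point text in the current style}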

\def\rm{\fam=0 \setfontstyle{rm}}
\def\it{\fam=\itfam \setfontstyle{it}}
\def\sl{\fam=\slfam \setfontstyle{sl}}
\def\bf{\fam=\bffam \setfontstyle{bf}}\def\bfstylename{bf}
\def\tt{\fam=\ttfam \setfontstyle{tt}}

% Unfortunately, we have to override this for titles and the like, since
% in those cases "rm" is bold.  Sigh.
\def\rmisbold{\rm\def\curfontstyle{bf}}

% Texinfo sort of supports the sans serif font style, which plain TeX does not.
% So we set up a \sf.
\newfam\sffam
\def\sf{\fam=\sffam \setfontstyle{sf}}
\let\li = \sf % Sometimes we call it \li, not \sf.

% We don't need math for this font style.
\def\ttsl{\setfontstyle{ttsl}}


% Set the baselineskip to #1, and the lineskip and strut size
% correspondingly.  There is no deep meaning behind these magic numbers
% used as factors; they just match (closely enough) what Knuth defined.
%
\def\lineskipfactor{.08333}
\def\strutheightpercent{.70833}
\def\strutdepthpercent {.29167}
%
% One can get a sort of poor man's double spacing by redefining this.
\def\baselinefactor{1}
%
\newdimen\textleading
\def\setleading#1{%
  \dimen0 = #1\relax
  \normalbaselineskip = \baselinefactor\dimen0
  \normallineskip = \lineskipfactor\normalbaselineskip
  \normalbaselines
  \setbox\strutbox =\hbox{%
    \vrule width0pt height\strutheightpercent\baselineskip
                    depth \strutdepthpercent \baselineskip
  }%
}
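% A sketch of the intended use: with the 11pt text fonts,
%   \setleading{13.2pt}
% yields \normalbaselineskip = 13.2pt, \normallineskip of about 1.1pt
% (.08333 x 13.2pt), and a strut of about 9.35pt height and 3.85pt depth.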

% PDF CMaps.  See also LaTeX's t1.cmap.
%
% do nothing with this by default.
\expandafter\let\csname cmapOT1\endcsname\gobble
\expandafter\let\csname cmapOT1IT\endcsname\gobble
\expandafter\let\csname cmapOT1TT\endcsname\gobble

% If we are producing PDF, and we have \pdffontattr, then define CMaps.
% (\pdffontattr was introduced many years ago, but people still run
% older versions of pdftex; it's easy to conditionalize, so we do.)
\ifpdf \ifx\pdffontattr\thisisundefined \else
  \begingroup
    \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char.
    \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap
%%DocumentNeededResources: ProcSet (CIDInit)
%%IncludeResource: ProcSet (CIDInit)
%%BeginResource: CMap (TeX-OT1-0)
%%Title: (TeX-OT1-0 TeX OT1 0)
%%Version: 1.000
%%EndComments
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo
<< /Registry (TeX)
/Ordering (OT1)
/Supplement 0
>> def
/CMapName /TeX-OT1-0 def
/CMapType 2 def
1 begincodespacerange
<00> <7F>
endcodespacerange
8 beginbfrange
<00> <01> <0393>
<09> <0A> <03A8>
<23> <26> <0023>
<28> <3B> <0028>
<3F> <5B> <003F>
<5D> <5E> <005D>
<61> <7A> <0061>
<7B> <7C> <2013>
endbfrange
40 beginbfchar
<02> <0398>
<03> <039B>
<04> <039E>
<05> <03A0>
<06> <03A3>
<07> <03D2>
<08> <03A6>
<0B> <00660066>
<0C> <00660069>
<0D> <0066006C>
<0E> <006600660069>
<0F> <00660066006C>
<10> <0131>
<11> <0237>
<12> <0060>
<13> <00B4>
<14> <02C7>
<15> <02D8>
<16> <00AF>
<17> <02DA>
<18> <00B8>
<19> <00DF>
<1A> <00E6>
<1B> <0153>
<1C> <00F8>
<1D> <00C6>
<1E> <0152>
<1F> <00D8>
<21> <0021>
<22> <201D>
<27> <2019>
<3C> <00A1>
<3D> <003D>
<3E> <00BF>
<5C> <201C>
<5F> <02D9>
<60> <2018>
<7D> <02DD>
<7E> <007E>
<7F> <00A8>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
%%EndResource
%%EOF
    }\endgroup
  \expandafter\edef\csname cmapOT1\endcsname#1{%
    \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}%
  }%
%
% \cmapOT1IT
  \begingroup
    \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char.
    \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap
%%DocumentNeededResources: ProcSet (CIDInit)
%%IncludeResource: ProcSet (CIDInit)
%%BeginResource: CMap (TeX-OT1IT-0)
%%Title: (TeX-OT1IT-0 TeX OT1IT 0)
%%Version: 1.000
%%EndComments
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo
<< /Registry (TeX)
/Ordering (OT1IT)
/Supplement 0
>> def
/CMapName /TeX-OT1IT-0 def
/CMapType 2 def
1 begincodespacerange
<00> <7F>
endcodespacerange
8 beginbfrange
<00> <01> <0393>
<09> <0A> <03A8>
<25> <26> <0025>
<28> <3B> <0028>
<3F> <5B> <003F>
<5D> <5E> <005D>
<61> <7A> <0061>
<7B> <7C> <2013>
endbfrange
42 beginbfchar
<02> <0398>
<03> <039B>
<04> <039E>
<05> <03A0>
<06> <03A3>
<07> <03D2>
<08> <03A6>
<0B> <00660066>
<0C> <00660069>
<0D> <0066006C>
<0E> <006600660069>
<0F> <00660066006C>
<10> <0131>
<11> <0237>
<12> <0060>
<13> <00B4>
<14> <02C7>
<15> <02D8>
<16> <00AF>
<17> <02DA>
<18> <00B8>
<19> <00DF>
<1A> <00E6>
<1B> <0153>
<1C> <00F8>
<1D> <00C6>
<1E> <0152>
<1F> <00D8>
<21> <0021>
<22> <201D>
<23> <0023>
<24> <00A3>
<27> <2019>
<3C> <00A1>
<3D> <003D>
<3E> <00BF>
<5C> <201C>
<5F> <02D9>
<60> <2018>
<7D> <02DD>
<7E> <007E>
<7F> <00A8>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
%%EndResource
%%EOF
    }\endgroup
  \expandafter\edef\csname cmapOT1IT\endcsname#1{%
    \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}%
  }%
%
% \cmapOT1TT
  \begingroup
    \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char.
    \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap
%%DocumentNeededResources: ProcSet (CIDInit)
%%IncludeResource: ProcSet (CIDInit)
%%BeginResource: CMap (TeX-OT1TT-0)
%%Title: (TeX-OT1TT-0 TeX OT1TT 0)
%%Version: 1.000
%%EndComments
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo
<< /Registry (TeX)
/Ordering (OT1TT)
/Supplement 0
>> def
/CMapName /TeX-OT1TT-0 def
/CMapType 2 def
1 begincodespacerange
<00> <7F>
endcodespacerange
5 beginbfrange
<00> <01> <0393>
<09> <0A> <03A8>
<21> <26> <0021>
<28> <5F> <0028>
<61> <7E> <0061>
endbfrange
32 beginbfchar
<02> <0398>
<03> <039B>
<04> <039E>
<05> <03A0>
<06> <03A3>
<07> <03D2>
<08> <03A6>
<0B> <2191>
<0C> <2193>
<0D> <0027>
<0E> <00A1>
<0F> <00BF>
<10> <0131>
<11> <0237>
<12> <0060>
<13> <00B4>
<14> <02C7>
<15> <02D8>
<16> <00AF>
<17> <02DA>
<18> <00B8>
<19> <00DF>
<1A> <00E6>
<1B> <0153>
<1C> <00F8>
<1D> <00C6>
<1E> <0152>
<1F> <00D8>
<20> <2423>
<27> <2019>
<60> <2018>
<7F> <00A8>
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end
%%EndResource
%%EOF
    }\endgroup
  \expandafter\edef\csname cmapOT1TT\endcsname#1{%
    \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}%
  }%
\fi\fi


% Set the font macro #1 to the font named \fontprefix#2.
% #3 is the font's design size, #4 is a scale factor, #5 is the CMap
% encoding (only OT1, OT1IT and OT1TT are allowed, or empty to omit).
% Example:
% #1 = \textrm
% #2 = \rmshape
% #3 = 10
% #4 = \mainmagstep
% #5 = OT1
%
\def\setfont#1#2#3#4#5{%
  \font#1=\fontprefix#2#3 scaled #4
  \csname cmap#5\endcsname#1%
}
% This is what gets called when #5 of \setfont is empty.
\let\cmap\gobble
%
% (end of cmaps)

% Use cm as the default font prefix.
% To specify the font prefix, you must define \fontprefix
% before you read in texinfo.tex.
\ifx\fontprefix\thisisundefined
\def\fontprefix{cm}
\fi
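% For example, to use a (hypothetical) font family whose fonts follow
% the CM naming scheme under the prefix `xy', a wrapper file could say
%   \def\fontprefix{xy}
%   \input texinfo.tex
% so that \setfont\textrm\rmshape{10}{\mainmagstep}{OT1} loads xyr10.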
% Support font families that don't use the same naming scheme as CM.
\def\rmshape{r}
\def\rmbshape{bx}               % where the normal face is bold
\def\bfshape{b}
\def\bxshape{bx}
\def\ttshape{tt}
\def\ttbshape{tt}
\def\ttslshape{sltt}
\def\itshape{ti}
\def\itbshape{bxti}
\def\slshape{sl}
\def\slbshape{bxsl}
\def\sfshape{ss}
\def\sfbshape{ss}
\def\scshape{csc}
\def\scbshape{csc}

% Definitions for a main text size of 11pt.  (The default in Texinfo.)
%
\def\definetextfontsizexi{%
% Text fonts (11.2pt, magstep1).
\def\textnominalsize{11pt}
\edef\mainmagstep{\magstephalf}
\setfont\textrm\rmshape{10}{\mainmagstep}{OT1}
\setfont\texttt\ttshape{10}{\mainmagstep}{OT1TT}
\setfont\textbf\bfshape{10}{\mainmagstep}{OT1}
\setfont\textit\itshape{10}{\mainmagstep}{OT1IT}
\setfont\textsl\slshape{10}{\mainmagstep}{OT1}
\setfont\textsf\sfshape{10}{\mainmagstep}{OT1}
\setfont\textsc\scshape{10}{\mainmagstep}{OT1}
\setfont\textttsl\ttslshape{10}{\mainmagstep}{OT1TT}
\font\texti=cmmi10 scaled \mainmagstep
\font\textsy=cmsy10 scaled \mainmagstep
\def\textecsize{1095}

% A few fonts for @defun names and args.
\setfont\defbf\bfshape{10}{\magstep1}{OT1}
\setfont\deftt\ttshape{10}{\magstep1}{OT1TT}
\setfont\defttsl\ttslshape{10}{\magstep1}{OT1TT}
\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf}

% Fonts for indices, footnotes, small examples (9pt).
\def\smallnominalsize{9pt}
\setfont\smallrm\rmshape{9}{1000}{OT1}
\setfont\smalltt\ttshape{9}{1000}{OT1TT}
\setfont\smallbf\bfshape{10}{900}{OT1}
\setfont\smallit\itshape{9}{1000}{OT1IT}
\setfont\smallsl\slshape{9}{1000}{OT1}
\setfont\smallsf\sfshape{9}{1000}{OT1}
\setfont\smallsc\scshape{10}{900}{OT1}
\setfont\smallttsl\ttslshape{10}{900}{OT1TT}
\font\smalli=cmmi9
\font\smallsy=cmsy9
\def\smallecsize{0900}

% Fonts for small examples (8pt).
\def\smallernominalsize{8pt}
\setfont\smallerrm\rmshape{8}{1000}{OT1}
\setfont\smallertt\ttshape{8}{1000}{OT1TT}
\setfont\smallerbf\bfshape{10}{800}{OT1}
\setfont\smallerit\itshape{8}{1000}{OT1IT}
\setfont\smallersl\slshape{8}{1000}{OT1}
\setfont\smallersf\sfshape{8}{1000}{OT1}
\setfont\smallersc\scshape{10}{800}{OT1}
\setfont\smallerttsl\ttslshape{10}{800}{OT1TT}
\font\smalleri=cmmi8
\font\smallersy=cmsy8
\def\smallerecsize{0800}

% Fonts for title page (20.4pt):
\def\titlenominalsize{20pt}
\setfont\titlerm\rmbshape{12}{\magstep3}{OT1}
\setfont\titleit\itbshape{10}{\magstep4}{OT1IT}
\setfont\titlesl\slbshape{10}{\magstep4}{OT1}
\setfont\titlett\ttbshape{12}{\magstep3}{OT1TT}
\setfont\titlettsl\ttslshape{10}{\magstep4}{OT1TT}
\setfont\titlesf\sfbshape{17}{\magstep1}{OT1}
\let\titlebf=\titlerm
\setfont\titlesc\scbshape{10}{\magstep4}{OT1}
\font\titlei=cmmi12 scaled \magstep3
\font\titlesy=cmsy10 scaled \magstep4
\def\titleecsize{2074}

% Chapter (and unnumbered) fonts (17.28pt).
\def\chapnominalsize{17pt}
\setfont\chaprm\rmbshape{12}{\magstep2}{OT1}
\setfont\chapit\itbshape{10}{\magstep3}{OT1IT}
\setfont\chapsl\slbshape{10}{\magstep3}{OT1}
\setfont\chaptt\ttbshape{12}{\magstep2}{OT1TT}
\setfont\chapttsl\ttslshape{10}{\magstep3}{OT1TT}
\setfont\chapsf\sfbshape{17}{1000}{OT1}
\let\chapbf=\chaprm
\setfont\chapsc\scbshape{10}{\magstep3}{OT1}
\font\chapi=cmmi12 scaled \magstep2
\font\chapsy=cmsy10 scaled \magstep3
\def\chapecsize{1728}

% Section fonts (14.4pt).
\def\secnominalsize{14pt}
\setfont\secrm\rmbshape{12}{\magstep1}{OT1}
\setfont\secit\itbshape{10}{\magstep2}{OT1IT}
\setfont\secsl\slbshape{10}{\magstep2}{OT1}
\setfont\sectt\ttbshape{12}{\magstep1}{OT1TT}
\setfont\secttsl\ttslshape{10}{\magstep2}{OT1TT}
\setfont\secsf\sfbshape{12}{\magstep1}{OT1}
\let\secbf\secrm
\setfont\secsc\scbshape{10}{\magstep2}{OT1}
\font\seci=cmmi12 scaled \magstep1
\font\secsy=cmsy10 scaled \magstep2
\def\sececsize{1440}

% Subsection fonts (13.15pt).
\def\ssecnominalsize{13pt}
\setfont\ssecrm\rmbshape{12}{\magstephalf}{OT1}
\setfont\ssecit\itbshape{10}{1315}{OT1IT}
\setfont\ssecsl\slbshape{10}{1315}{OT1}
\setfont\ssectt\ttbshape{12}{\magstephalf}{OT1TT}
\setfont\ssecttsl\ttslshape{10}{1315}{OT1TT}
\setfont\ssecsf\sfbshape{12}{\magstephalf}{OT1}
\let\ssecbf\ssecrm
\setfont\ssecsc\scbshape{10}{1315}{OT1}
\font\sseci=cmmi12 scaled \magstephalf
\font\ssecsy=cmsy10 scaled 1315
\def\ssececsize{1200}

% Reduced fonts for @acro in text (10pt).
\def\reducednominalsize{10pt}
\setfont\reducedrm\rmshape{10}{1000}{OT1}
\setfont\reducedtt\ttshape{10}{1000}{OT1TT}
\setfont\reducedbf\bfshape{10}{1000}{OT1}
\setfont\reducedit\itshape{10}{1000}{OT1IT}
\setfont\reducedsl\slshape{10}{1000}{OT1}
\setfont\reducedsf\sfshape{10}{1000}{OT1}
\setfont\reducedsc\scshape{10}{1000}{OT1}
\setfont\reducedttsl\ttslshape{10}{1000}{OT1TT}
\font\reducedi=cmmi10
\font\reducedsy=cmsy10
\def\reducedecsize{1000}

\textleading = 13.2pt % line spacing for 11pt CM
\textfonts            % reset the current fonts
\rm
} % end of 11pt text font size definitions, \definetextfontsizexi


% Definitions to make the main text be 10pt Computer Modern, with
% section, chapter, etc., sizes following suit.  This is for the GNU
% Press printing of the Emacs 22 manual.  Maybe other manuals in the
% future.  Used with @smallbook, which sets the leading to 12pt.
%
\def\definetextfontsizex{%
% Text fonts (10pt).
\def\textnominalsize{10pt}
\edef\mainmagstep{1000}
\setfont\textrm\rmshape{10}{\mainmagstep}{OT1}
\setfont\texttt\ttshape{10}{\mainmagstep}{OT1TT}
\setfont\textbf\bfshape{10}{\mainmagstep}{OT1}
\setfont\textit\itshape{10}{\mainmagstep}{OT1IT}
\setfont\textsl\slshape{10}{\mainmagstep}{OT1}
\setfont\textsf\sfshape{10}{\mainmagstep}{OT1}
\setfont\textsc\scshape{10}{\mainmagstep}{OT1}
\setfont\textttsl\ttslshape{10}{\mainmagstep}{OT1TT}
\font\texti=cmmi10 scaled \mainmagstep
\font\textsy=cmsy10 scaled \mainmagstep
\def\textecsize{1000}

% A few fonts for @defun names and args.
\setfont\defbf\bfshape{10}{\magstephalf}{OT1}
\setfont\deftt\ttshape{10}{\magstephalf}{OT1TT}
\setfont\defttsl\ttslshape{10}{\magstephalf}{OT1TT}
\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf}

% Fonts for indices, footnotes, small examples (9pt).
\def\smallnominalsize{9pt}
\setfont\smallrm\rmshape{9}{1000}{OT1}
\setfont\smalltt\ttshape{9}{1000}{OT1TT}
\setfont\smallbf\bfshape{10}{900}{OT1}
\setfont\smallit\itshape{9}{1000}{OT1IT}
\setfont\smallsl\slshape{9}{1000}{OT1}
\setfont\smallsf\sfshape{9}{1000}{OT1}
\setfont\smallsc\scshape{10}{900}{OT1}
\setfont\smallttsl\ttslshape{10}{900}{OT1TT}
\font\smalli=cmmi9
\font\smallsy=cmsy9
\def\smallecsize{0900}

% Fonts for small examples (8pt).
\def\smallernominalsize{8pt}
\setfont\smallerrm\rmshape{8}{1000}{OT1}
\setfont\smallertt\ttshape{8}{1000}{OT1TT}
\setfont\smallerbf\bfshape{10}{800}{OT1}
\setfont\smallerit\itshape{8}{1000}{OT1IT}
\setfont\smallersl\slshape{8}{1000}{OT1}
\setfont\smallersf\sfshape{8}{1000}{OT1}
\setfont\smallersc\scshape{10}{800}{OT1}
\setfont\smallerttsl\ttslshape{10}{800}{OT1TT}
\font\smalleri=cmmi8
\font\smallersy=cmsy8
\def\smallerecsize{0800}

% Fonts for title page (20.4pt):
\def\titlenominalsize{20pt}
\setfont\titlerm\rmbshape{12}{\magstep3}{OT1}
\setfont\titleit\itbshape{10}{\magstep4}{OT1IT}
\setfont\titlesl\slbshape{10}{\magstep4}{OT1}
\setfont\titlett\ttbshape{12}{\magstep3}{OT1TT}
\setfont\titlettsl\ttslshape{10}{\magstep4}{OT1TT}
\setfont\titlesf\sfbshape{17}{\magstep1}{OT1}
\let\titlebf=\titlerm
\setfont\titlesc\scbshape{10}{\magstep4}{OT1}
\font\titlei=cmmi12 scaled \magstep3
\font\titlesy=cmsy10 scaled \magstep4
\def\titleecsize{2074}

% Chapter fonts (14.4pt).
\def\chapnominalsize{14pt}
\setfont\chaprm\rmbshape{12}{\magstep1}{OT1}
\setfont\chapit\itbshape{10}{\magstep2}{OT1IT}
\setfont\chapsl\slbshape{10}{\magstep2}{OT1}
\setfont\chaptt\ttbshape{12}{\magstep1}{OT1TT}
\setfont\chapttsl\ttslshape{10}{\magstep2}{OT1TT}
\setfont\chapsf\sfbshape{12}{\magstep1}{OT1}
\let\chapbf\chaprm
\setfont\chapsc\scbshape{10}{\magstep2}{OT1}
\font\chapi=cmmi12 scaled \magstep1
\font\chapsy=cmsy10 scaled \magstep2
\def\chapecsize{1440}

% Section fonts (12pt).
\def\secnominalsize{12pt}
\setfont\secrm\rmbshape{12}{1000}{OT1}
\setfont\secit\itbshape{10}{\magstep1}{OT1IT}
\setfont\secsl\slbshape{10}{\magstep1}{OT1}
\setfont\sectt\ttbshape{12}{1000}{OT1TT}
\setfont\secttsl\ttslshape{10}{\magstep1}{OT1TT}
\setfont\secsf\sfbshape{12}{1000}{OT1}
\let\secbf\secrm
\setfont\secsc\scbshape{10}{\magstep1}{OT1}
\font\seci=cmmi12
\font\secsy=cmsy10 scaled \magstep1
\def\sececsize{1200}

% Subsection fonts (10pt).
\def\ssecnominalsize{10pt}
\setfont\ssecrm\rmbshape{10}{1000}{OT1}
\setfont\ssecit\itbshape{10}{1000}{OT1IT}
\setfont\ssecsl\slbshape{10}{1000}{OT1}
\setfont\ssectt\ttbshape{10}{1000}{OT1TT}
\setfont\ssecttsl\ttslshape{10}{1000}{OT1TT}
\setfont\ssecsf\sfbshape{10}{1000}{OT1}
\let\ssecbf\ssecrm
\setfont\ssecsc\scbshape{10}{1000}{OT1}
\font\sseci=cmmi10
\font\ssecsy=cmsy10
\def\ssececsize{1000}

% Reduced fonts for @acro in text (9pt).
\def\reducednominalsize{9pt}
\setfont\reducedrm\rmshape{9}{1000}{OT1}
\setfont\reducedtt\ttshape{9}{1000}{OT1TT}
\setfont\reducedbf\bfshape{10}{900}{OT1}
\setfont\reducedit\itshape{9}{1000}{OT1IT}
\setfont\reducedsl\slshape{9}{1000}{OT1}
\setfont\reducedsf\sfshape{9}{1000}{OT1}
\setfont\reducedsc\scshape{10}{900}{OT1}
\setfont\reducedttsl\ttslshape{10}{900}{OT1TT}
\font\reducedi=cmmi9
\font\reducedsy=cmsy9
\def\reducedecsize{0900}

\divide\parskip by 2  % reduce space between paragraphs
\textleading = 12pt   % line spacing for 10pt CM
\textfonts            % reset the current fonts
\rm
} % end of 10pt text font size definitions, \definetextfontsizex


% We provide the user-level command
%   @fonttextsize 10
% (or 11) to redefine the text font size.  pt is assumed.
%
\def\xiword{11}
\def\xword{10}
\def\xwordpt{10pt}
%
\parseargdef\fonttextsize{%
  \def\textsizearg{#1}%
  %\wlog{doing @fonttextsize \textsizearg}%
  %
  % Set \globaldefs so that documents can use this inside @tex, since
  % makeinfo 4.8 does not support it, but we need it nonetheless.
  %
 \begingroup \globaldefs=1
  \ifx\textsizearg\xword \definetextfontsizex
  \else \ifx\textsizearg\xiword \definetextfontsizexi
  \else
    \errhelp=\EMsimple
    \errmessage{@fonttextsize only supports `10' or `11', not `\textsizearg'}
  \fi\fi
 \endgroup
}
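% Typical use in a Texinfo document, before the first chapter:
%   @fonttextsize 10
% Any argument other than 10 or 11 produces the error message above.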


% In order for the font changes to affect most math symbols and letters,
% we have to define the \textfont of the standard families.  Since
% texinfo doesn't allow for producing subscripts and superscripts except
% in the main text, we don't bother to reset \scriptfont and
% \scriptscriptfont (which would also require loading a lot more fonts).
%
\def\resetmathfonts{%
  \textfont0=\tenrm \textfont1=\teni \textfont2=\tensy
  \textfont\itfam=\tenit \textfont\slfam=\tensl \textfont\bffam=\tenbf
  \textfont\ttfam=\tentt \textfont\sffam=\tensf
}

% The font-changing commands redefine the meanings of \tenSTYLE, instead
% of just \STYLE.  We do this because \STYLE needs to also set the
% current \fam for math mode.  Our \STYLE (e.g., \rm) commands hardwire
% \tenSTYLE to set the current font.
%
% Each font-changing command also sets the names \lsize (one size lower)
% and \lllsize (three sizes lower).  These relative commands are used in
% the LaTeX logo and acronyms.
%
% This all needs generalizing, badly.
%
\def\textfonts{%
  \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl
  \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc
  \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy
  \let\tenttsl=\textttsl
  \def\curfontsize{text}%
  \def\lsize{reduced}\def\lllsize{smaller}%
  \resetmathfonts \setleading{\textleading}}
\def\titlefonts{%
  \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl
  \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc
  \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy
  \let\tenttsl=\titlettsl
  \def\curfontsize{title}%
  \def\lsize{chap}\def\lllsize{subsec}%
  \resetmathfonts \setleading{27pt}}
\def\titlefont#1{{\titlefonts\rmisbold #1}}
\def\chapfonts{%
  \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl
  \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc
  \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy
  \let\tenttsl=\chapttsl
  \def\curfontsize{chap}%
  \def\lsize{sec}\def\lllsize{text}%
  \resetmathfonts \setleading{19pt}}
\def\secfonts{%
  \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl
  \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc
  \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy
  \let\tenttsl=\secttsl
  \def\curfontsize{sec}%
  \def\lsize{subsec}\def\lllsize{reduced}%
  \resetmathfonts \setleading{16pt}}
\def\subsecfonts{%
  \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl
  \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc
  \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy
  \let\tenttsl=\ssecttsl
  \def\curfontsize{ssec}%
  \def\lsize{text}\def\lllsize{small}%
  \resetmathfonts \setleading{15pt}}
\let\subsubsecfonts = \subsecfonts
\def\reducedfonts{%
  \let\tenrm=\reducedrm \let\tenit=\reducedit \let\tensl=\reducedsl
  \let\tenbf=\reducedbf \let\tentt=\reducedtt \let\reducedcaps=\reducedsc
  \let\tensf=\reducedsf \let\teni=\reducedi \let\tensy=\reducedsy
  \let\tenttsl=\reducedttsl
  \def\curfontsize{reduced}%
  \def\lsize{small}\def\lllsize{smaller}%
  \resetmathfonts \setleading{10.5pt}}
\def\smallfonts{%
  \let\tenrm=\smallrm \let\tenit=\smallit \let\tensl=\smallsl
  \let\tenbf=\smallbf \let\tentt=\smalltt \let\smallcaps=\smallsc
  \let\tensf=\smallsf \let\teni=\smalli \let\tensy=\smallsy
  \let\tenttsl=\smallttsl
  \def\curfontsize{small}%
  \def\lsize{smaller}\def\lllsize{smaller}%
  \resetmathfonts \setleading{10.5pt}}
\def\smallerfonts{%
  \let\tenrm=\smallerrm \let\tenit=\smallerit \let\tensl=\smallersl
  \let\tenbf=\smallerbf \let\tentt=\smallertt \let\smallcaps=\smallersc
  \let\tensf=\smallersf \let\teni=\smalleri \let\tensy=\smallersy
  \let\tenttsl=\smallerttsl
  \def\curfontsize{smaller}%
  \def\lsize{smaller}\def\lllsize{smaller}%
  \resetmathfonts \setleading{9.5pt}}

% Fonts for short table of contents.
\setfont\shortcontrm\rmshape{12}{1000}{OT1}
\setfont\shortcontbf\bfshape{10}{\magstep1}{OT1}  % no cmb12
\setfont\shortcontsl\slshape{12}{1000}{OT1}
\setfont\shortconttt\ttshape{12}{1000}{OT1TT}

% Define these just so they can be easily changed for other fonts.
\def\angleleft{$\langle$}
\def\angleright{$\rangle$}

% Set the fonts to use with the @small... environments.
\let\smallexamplefonts = \smallfonts

% About \smallexamplefonts.  If we use \smallfonts (9pt), @smallexample
% can fit this many characters:
%   8.5x11=86   smallbook=72  a4=90  a5=69
% If we use \scriptfonts (8pt), then we can fit this many characters:
%   8.5x11=90+  smallbook=80  a4=90+  a5=77
% For me, subjectively, the few extra characters that fit aren't worth
% the additional smallness of 8pt.  So I'm making the default 9pt.
%
% By the way, for comparison, here's what fits with @example (10pt):
%   8.5x11=71  smallbook=60  a4=75  a5=58
% --karl, 24jan03.

% Set up the default fonts, so we can use them for creating boxes.
%
\definetextfontsizexi


\message{markup,}

% Check if we are currently using a typewriter font.  Since all the
% Computer Modern typewriter fonts have zero interword stretch (and
% shrink), and it is reasonable to expect all typewriter fonts to have
% this property, we can check that font parameter.
%
\def\ifmonospace{\ifdim\fontdimen3\font=0pt }
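% For example (a sketch), a macro can branch on the current font:
%   \ifmonospace \message{typewriter font}\else \message{not typewriter}\fi
% since \fontdimen3 (interword stretch) is zero for the tt fonts.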

% Markup style infrastructure.  \defmarkupstylesetup\INITMACRO will
% define and register \INITMACRO to be called on markup style changes.
% \INITMACRO can check \currentmarkupstyle for the innermost
% style and the set of \ifmarkupSTYLE switches for all styles
% currently in effect.
\newif\ifmarkupvar
\newif\ifmarkupsamp
\newif\ifmarkupkey
%\newif\ifmarkupfile % @file == @samp.
%\newif\ifmarkupoption % @option == @samp.
\newif\ifmarkupcode
\newif\ifmarkupkbd
%\newif\ifmarkupenv % @env == @code.
%\newif\ifmarkupcommand % @command == @code.
\newif\ifmarkuptex % @tex (and part of @math, for now).
\newif\ifmarkupexample
\newif\ifmarkupverb
\newif\ifmarkupverbatim

\let\currentmarkupstyle\empty

\def\setupmarkupstyle#1{%
  \csname markup#1true\endcsname
  \def\currentmarkupstyle{#1}%
  \markupstylesetup
}

\let\markupstylesetup\empty

\def\defmarkupstylesetup#1{%
  \expandafter\def\expandafter\markupstylesetup
    \expandafter{\markupstylesetup #1}%
  \def#1%
}
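% A hypothetical handler (\mycheck is not part of texinfo.tex) could be
% registered and test the switches like this (a sketch):
%   \defmarkupstylesetup\mycheck{%
%     \ifmarkupcode \message{inside @code}\fi}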

% Markup style setup for left and right quotes.
\defmarkupstylesetup\markupsetuplq{%
  \expandafter\let\expandafter \temp
    \csname markupsetuplq\currentmarkupstyle\endcsname
  \ifx\temp\relax \markupsetuplqdefault \else \temp \fi
}

\defmarkupstylesetup\markupsetuprq{%
  \expandafter\let\expandafter \temp
    \csname markupsetuprq\currentmarkupstyle\endcsname
  \ifx\temp\relax \markupsetuprqdefault \else \temp \fi
}

{
\catcode`\'=\active
\catcode`\`=\active

\gdef\markupsetuplqdefault{\let`\lq}
\gdef\markupsetuprqdefault{\let'\rq}

\gdef\markupsetcodequoteleft{\let`\codequoteleft}
\gdef\markupsetcodequoteright{\let'\codequoteright}
}

\let\markupsetuplqcode \markupsetcodequoteleft
\let\markupsetuprqcode \markupsetcodequoteright
%
\let\markupsetuplqexample \markupsetcodequoteleft
\let\markupsetuprqexample \markupsetcodequoteright
%
\let\markupsetuplqkbd     \markupsetcodequoteleft
\let\markupsetuprqkbd     \markupsetcodequoteright
%
\let\markupsetuplqsamp \markupsetcodequoteleft
\let\markupsetuprqsamp \markupsetcodequoteright
%
\let\markupsetuplqverb \markupsetcodequoteleft
\let\markupsetuprqverb \markupsetcodequoteright
%
\let\markupsetuplqverbatim \markupsetcodequoteleft
\let\markupsetuprqverbatim \markupsetcodequoteright

% Allow an option to not use the regular directed right quote/apostrophe
% (char 0x27), but instead the undirected quote from cmtt (char 0x0d).
% The undirected quote is ugly, so don't make it the default; but the
% lilypond developers report that it works for pasting with more PDF
% viewers (at least evince).  xpdf does work with the regular 0x27.
%
\def\codequoteright{%
  \expandafter\ifx\csname SETtxicodequoteundirected\endcsname\relax
    \expandafter\ifx\csname SETcodequoteundirected\endcsname\relax
      '%
    \else \char'15 \fi
  \else \char'15 \fi
}
%
% and a similar option for the left quote char vs. a grave accent.
% Modern fonts display ASCII 0x60 as a grave accent, so some people like
% the code environments to do likewise.
%
\def\codequoteleft{%
  \expandafter\ifx\csname SETtxicodequotebacktick\endcsname\relax
    \expandafter\ifx\csname SETcodequotebacktick\endcsname\relax
      % [Knuth] pp. 380,381,391
      % \relax disables Spanish ligatures ?` and !` of \tt font.
      \relax`%
    \else \char'22 \fi
  \else \char'22 \fi
}

% Commands to set the quote options.
% 
\parseargdef\codequoteundirected{%
  \def\temp{#1}%
  \ifx\temp\onword
    \expandafter\let\csname SETtxicodequoteundirected\endcsname
      = t%
  \else\ifx\temp\offword
    \expandafter\let\csname SETtxicodequoteundirected\endcsname
      = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @codequoteundirected value `\temp', must be on|off}%
  \fi\fi
}
%
\parseargdef\codequotebacktick{%
  \def\temp{#1}%
  \ifx\temp\onword
    \expandafter\let\csname SETtxicodequotebacktick\endcsname
      = t%
  \else\ifx\temp\offword
    \expandafter\let\csname SETtxicodequotebacktick\endcsname
      = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @codequotebacktick value `\temp', must be on|off}%
  \fi\fi
}
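% Typical use in a Texinfo document:
%   @codequoteundirected on
%   @codequotebacktick on
% after which, in code environments, ' prints as the undirected quote
% (\char'15) and ` prints as the grave-accent glyph (\char'22) of the
% tt font, per \codequoteright and \codequoteleft above.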

% [Knuth] pp. 380,381,391, disable Spanish ligatures ?` and !` of \tt font.
\def\noligaturesquoteleft{\relax\lq}

% Count the depth of font changes, for error checks.
\newcount\fontdepth \fontdepth=0

% Font commands.

% #1 is the font command (\sl or \it), #2 is the text to slant.
% If we are in a monospaced environment, however, 1) always use \ttsl,
% and 2) do not add an italic correction.
\def\dosmartslant#1#2{%
  \ifusingtt 
    {{\ttsl #2}\let\next=\relax}%
    {\def\next{{#1#2}\futurelet\next\smartitaliccorrection}}%
  \next
}
\def\smartslanted{\dosmartslant\sl}
\def\smartitalic{\dosmartslant\it}

% Output an italic correction unless \next (presumed to be the following
% character) is such as not to need one.
\def\smartitaliccorrection{%
  \ifx\next,%
  \else\ifx\next-%
  \else\ifx\next.%
  \else\ptexslash
  \fi\fi\fi
  \aftersmartic
}

% Unconditionally use \ttsl, and no italic correction.  @var is set to
% this for defuns.
\def\ttslanted#1{{\ttsl #1}}

% @cite is like \smartslanted except it unconditionally uses \sl.  We
% never want \ttsl for book titles, do we?
\def\cite#1{{\sl #1}\futurelet\next\smartitaliccorrection}

\def\aftersmartic{}
\def\var#1{%
  \let\saveaftersmartic = \aftersmartic
  \def\aftersmartic{\null\let\aftersmartic=\saveaftersmartic}%
  \smartslanted{#1}%
}

\let\i=\smartitalic
\let\slanted=\smartslanted
\let\dfn=\smartslanted
\let\emph=\smartitalic

% Explicit font changes: @r, @sc, undocumented @ii.
\def\r#1{{\rm #1}}              % roman font
\def\sc#1{{\smallcaps#1}}       % smallcaps font
\def\ii#1{{\it #1}}             % italic font

% @b, explicit bold.  Also @strong.
\def\b#1{{\bf #1}}
\let\strong=\b

% @sansserif, explicit sans.
\def\sansserif#1{{\sf #1}}

% We can't just use \exhyphenpenalty, because that only has effect at
% the end of a paragraph.  Restore normal hyphenation at the end of the
% group within which \nohyphenation is presumably called.
%
\def\nohyphenation{\hyphenchar\font = -1  \aftergroup\restorehyphenation}
\def\restorehyphenation{\hyphenchar\font = `- }

% Set sfcode to normal for the chars that usually have another value.
% Can't use plain's \frenchspacing because it uses the `\x notation, and
% sometimes \x has an active definition that messes things up.
%
\catcode`@=11
  \def\plainfrenchspacing{%
    \sfcode\dotChar  =\@m \sfcode\questChar=\@m \sfcode\exclamChar=\@m
    \sfcode\colonChar=\@m \sfcode\semiChar =\@m \sfcode\commaChar =\@m
    \def\endofsentencespacefactor{1000}% for @. and friends
  }
  \def\plainnonfrenchspacing{%
    \sfcode`\.3000\sfcode`\?3000\sfcode`\!3000
    \sfcode`\:2000\sfcode`\;1500\sfcode`\,1250
    \def\endofsentencespacefactor{3000}% for @. and friends
  }
\catcode`@=\other
\def\endofsentencespacefactor{3000}% default

% @t, explicit typewriter.
\def\t#1{%
  {\tt \rawbackslash \plainfrenchspacing #1}%
  \null
}

% @samp.
\def\samp#1{{\setupmarkupstyle{samp}\lq\tclose{#1}\rq\null}}

% @indicateurl is \samp, that is, with quotes.
\let\indicateurl=\samp

% @code (and similar) prints in typewriter, but with spaces the same
% size as normal in the surrounding text, without hyphenation, etc.
% This is a subroutine for that.
\def\tclose#1{%
  {%
    % Change normal interword space to be same as for the current font.
    \spaceskip = \fontdimen2\font
    %
    % Switch to typewriter.
    \tt
    %
    % But `\ ' produces the large typewriter interword space.
    \def\ {{\spaceskip = 0pt{} }}%
    %
    % Turn off hyphenation.
    \nohyphenation
    %
    \rawbackslash
    \plainfrenchspacing
    #1%
  }%
  \null % reset spacefactor to 1000
}

% We *must* turn on hyphenation at `-' and `_' in @code.
% Otherwise, it is too hard to avoid overfull hboxes
% in the Emacs manual, the Library manual, etc.
%
% Unfortunately, TeX uses one parameter (\hyphenchar) to control
% both hyphenation at - and hyphenation within words.
% We must therefore turn them both off (\tclose does that)
% and arrange explicitly to hyphenate at a dash.
%  -- rms.
{
  \catcode`\-=\active \catcode`\_=\active
  \catcode`\'=\active \catcode`\`=\active
  \global\let'=\rq \global\let`=\lq  % default definitions
  %
  \global\def\code{\begingroup
    \setupmarkupstyle{code}%
    % The following should really be moved into \setupmarkupstyle handlers.
    \catcode\dashChar=\active  \catcode\underChar=\active
    \ifallowcodebreaks
     \let-\codedash
     \let_\codeunder
    \else
     \let-\realdash
     \let_\realunder
    \fi
    \codex
  }
}

\def\codex #1{\tclose{#1}\endgroup}

\def\realdash{-}
\def\codedash{-\discretionary{}{}{}}
\def\codeunder{%
  % This is all so @math{@code{var_name}+1} can work.  In math mode, _
  % is "active" (mathcode"8000), so \normalunderscore (or \char95, etc.)
  % would expand the active definition of _, which is us (inside @code,
  % that is), resulting in an endless loop.
  \ifusingtt{\ifmmode
               \mathchar"075F % class 0=ordinary, family 7=ttfam, pos 0x5F=_.
             \else\normalunderscore \fi
             \discretionary{}{}{}}%
            {\_}%
}

% An additional complication: the above will allow breaks after, e.g.,
% each of the four underscores in __typeof__.  This is undesirable in
% some manuals, especially if they don't have long identifiers in
% general.  @allowcodebreaks provides a way to control this.
%
\newif\ifallowcodebreaks  \allowcodebreakstrue

\def\keywordtrue{true}
\def\keywordfalse{false}

\parseargdef\allowcodebreaks{%
  \def\txiarg{#1}%
  \ifx\txiarg\keywordtrue
    \allowcodebreakstrue
  \else\ifx\txiarg\keywordfalse
    \allowcodebreaksfalse
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @allowcodebreaks option `\txiarg', must be true|false}%
  \fi\fi
}
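% For example, a manual full of multi-word identifiers might write
%   @allowcodebreaks false
% after which @code{__typeof__} is kept on one line, since breaks at
% - and _ are disabled.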

% For @command, @env, @file, and @option, quotes seem unnecessary,
% so use \code rather than \samp.
\let\command=\code
\let\env=\code
\let\file=\code
\let\option=\code

% @uref (abbreviation for `urlref') takes an optional (comma-separated)
% second argument specifying the text to display and an optional third
% arg as text to display instead of (rather than in addition to) the url
% itself.  First (mandatory) arg is the url.
% (This \urefnobreak definition isn't used now, leaving it for a while
% for comparison.)
\def\urefnobreak#1{\dourefnobreak #1,,,\finish}
\def\dourefnobreak#1,#2,#3,#4\finish{\begingroup
  \unsepspaces
  \pdfurl{#1}%
  \setbox0 = \hbox{\ignorespaces #3}%
  \ifdim\wd0 > 0pt
    \unhbox0 % third arg given, show only that
  \else
    \setbox0 = \hbox{\ignorespaces #2}%
    \ifdim\wd0 > 0pt
      \ifpdf
        \unhbox0             % PDF: 2nd arg given, show only it
      \else
        \unhbox0\ (\code{#1})% DVI: 2nd arg given, show both it and url
      \fi
    \else
      \code{#1}% only url given, so show it
    \fi
  \fi
  \endlink
\endgroup}

% This \urefbreak definition is the active one.
\def\urefbreak{\begingroup \urefcatcodes \dourefbreak}
\let\uref=\urefbreak
\def\dourefbreak#1{\urefbreakfinish #1,,,\finish}
\def\urefbreakfinish#1,#2,#3,#4\finish{% doesn't work in @example
  \unsepspaces
  \pdfurl{#1}%
  \setbox0 = \hbox{\ignorespaces #3}%
  \ifdim\wd0 > 0pt
    \unhbox0 % third arg given, show only that
  \else
    \setbox0 = \hbox{\ignorespaces #2}%
    \ifdim\wd0 > 0pt
      \ifpdf
        \unhbox0             % PDF: 2nd arg given, show only it
      \else
        \unhbox0\ (\urefcode{#1})% DVI: 2nd arg given, show both it and url
      \fi
    \else
      \urefcode{#1}% only url given, so show it
    \fi
  \fi
  \endlink
\endgroup}

% Allow line breaks around only a few characters.
\def\urefcatcodes{%
  \catcode\ampChar=\active   \catcode\dotChar=\active
  \catcode\hashChar=\active  \catcode\questChar=\active
  \catcode\slashChar=\active
}
{
  \urefcatcodes
  %
  \global\def\urefcode{\begingroup
    \setupmarkupstyle{code}%
    \urefcatcodes
    \let&\urefcodeamp
    \let.\urefcodedot
    \let#\urefcodehash
    \let?\urefcodequest
    \let/\urefcodeslash
    \codex
  }
  %
  % By default, they are just regular characters.
  \global\def&{\normalamp}
  \global\def.{\normaldot}
  \global\def#{\normalhash}
  \global\def?{\normalquest}
  \global\def/{\normalslash}
}

% We put a little stretch before and after the breakable chars, to help
% line breaking of long urls.  The unequal skips make the result look
% better in cmtt at least, especially for dots.
\def\urefprestretch{\urefprebreak \hskip0pt plus.13em }
\def\urefpoststretch{\urefpostbreak \hskip0pt plus.1em }
%
\def\urefcodeamp{\urefprestretch \&\urefpoststretch}
\def\urefcodedot{\urefprestretch .\urefpoststretch}
\def\urefcodehash{\urefprestretch \#\urefpoststretch}
\def\urefcodequest{\urefprestretch ?\urefpoststretch}
\def\urefcodeslash{\futurelet\next\urefcodeslashfinish}
{
  \catcode`\/=\active
  \global\def\urefcodeslashfinish{%
    \urefprestretch \slashChar
    % Allow line break only after the final / in a sequence of
    % slashes, to avoid line break between the slashes in http://.
    \ifx\next/\else \urefpoststretch \fi
  }
}

% One more complication: by default we'll break after the special
% characters, but some people like to break before the special chars, so
% allow that.  Also allow no breaking at all, for manual control.
% 
\parseargdef\urefbreakstyle{%
  \def\txiarg{#1}%
  \ifx\txiarg\wordnone
    \def\urefprebreak{\nobreak}\def\urefpostbreak{\nobreak}
  \else\ifx\txiarg\wordbefore
    \def\urefprebreak{\allowbreak}\def\urefpostbreak{\nobreak}
  \else\ifx\txiarg\wordafter
    \def\urefprebreak{\nobreak}\def\urefpostbreak{\allowbreak}
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @urefbreakstyle setting `\txiarg'}%
  \fi\fi\fi
}
\def\wordafter{after}
\def\wordbefore{before}
\def\wordnone{none}

\urefbreakstyle after
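% For example, to break before the special characters instead:
%   @urefbreakstyle before
%   @uref{http://www.gnu.org/software/texinfo, Texinfo}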

% @url synonym for @uref, since that's how everyone uses it.
%
\let\url=\uref

% rms does not like angle brackets --karl, 17may97.
% So now @email is just like @uref, unless we are producing PDF.
%
%\def\email#1{\angleleft{\tt #1}\angleright}
\ifpdf
  \def\email#1{\doemail#1,,\finish}
  \def\doemail#1,#2,#3\finish{\begingroup
    \unsepspaces
    \pdfurl{mailto:#1}%
    \setbox0 = \hbox{\ignorespaces #2}%
    \ifdim\wd0>0pt\unhbox0\else\code{#1}\fi
    \endlink
  \endgroup}
\else
  \let\email=\uref
\fi
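% Typical usage (the address is illustrative; note the doubled @@
% needed in Texinfo source):
%   @email{bug-texinfo@@gnu.org, the Texinfo mailing list}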

% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always),
%   `example' (@kbd uses ttsl only inside of @example and friends),
%   or `code' (@kbd uses normal tty font always).
\parseargdef\kbdinputstyle{%
  \def\txiarg{#1}%
  \ifx\txiarg\worddistinct
    \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}%
  \else\ifx\txiarg\wordexample
    \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}%
  \else\ifx\txiarg\wordcode
    \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}%
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @kbdinputstyle setting `\txiarg'}%
  \fi\fi\fi
}
\def\worddistinct{distinct}
\def\wordexample{example}
\def\wordcode{code}

% Default is `distinct'.
\kbdinputstyle distinct
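% For example:
%   @kbdinputstyle example
% makes @kbd{C-x C-s} use the slanted typewriter font only inside
% @example and friends, and the normal typewriter font elsewhere.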

% @kbd is like @code, except that if the argument is just one @key command,
% then @kbd has no effect.
\def\kbd#1{{\def\look{#1}\expandafter\kbdsub\look??\par}}

\def\xkey{\key}
\def\kbdsub#1#2#3\par{%
  \def\one{#1}\def\three{#3}\def\threex{??}%
  \ifx\one\xkey\ifx\threex\three \key{#2}%
  \else{\tclose{\kbdfont\setupmarkupstyle{kbd}\look}}\fi
  \else{\tclose{\kbdfont\setupmarkupstyle{kbd}\look}}\fi
}

% definition of @key that produces a lozenge.  Doesn't adjust to text size.
%\setfont\keyrm\rmshape{8}{1000}{OT1}
%\font\keysy=cmsy9
%\def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{%
%  \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{%
%    \vbox{\hrule\kern-0.4pt
%     \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}%
%    \kern-0.4pt\hrule}%
%  \kern-.06em\raise0.4pt\hbox{\angleright}}}}

% definition of @key with no lozenge.  If the current font is already
% monospace, don't change it; that way, we respect @kbdinputstyle.  But
% if it isn't monospace, then use \tt.
%
\def\key#1{{\setupmarkupstyle{key}%
  \nohyphenation
  \ifmonospace\else\tt\fi
  #1}\null}

% @clicksequence{File @click{} Open ...}
\def\clicksequence#1{\begingroup #1\endgroup}

% @clickstyle @arrow   (by default)
\parseargdef\clickstyle{\def\click{#1}}
\def\click{\arrow}

% Typeset a dimension, e.g., `in' or `pt'.  The only reason for the
% argument is to make the input look right: @dmn{pt} instead of @dmn{}pt.
%
\def\dmn#1{\thinspace #1}

% @l was never documented to mean ``switch to the Lisp font'',
% and it is not used as such in any manual I can find.  We need it for
% Polish suppressed-l.  --karl, 22sep96.
%\def\l#1{{\li #1}\null}

% @acronym for "FBI", "NATO", and the like.
% We print this one point size smaller, since it's intended for
% all-uppercase.
%
\def\acronym#1{\doacronym #1,,\finish}
\def\doacronym#1,#2,#3\finish{%
  {\selectfonts\lsize #1}%
  \def\temp{#2}%
  \ifx\temp\empty \else
    \space ({\unsepspaces \ignorespaces \temp \unskip})%
  \fi
  \null % reset \spacefactor=1000
}
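% Typical usage:
%   @acronym{NATO, North Atlantic Treaty Organization}
% prints NATO one point size smaller, followed by the expansion in
% parentheses.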

% @abbr for "Comput. J." and the like.
% No font change, but don't do end-of-sentence spacing.
%
\def\abbr#1{\doabbr #1,,\finish}
\def\doabbr#1,#2,#3\finish{%
  {\plainfrenchspacing #1}%
  \def\temp{#2}%
  \ifx\temp\empty \else
    \space ({\unsepspaces \ignorespaces \temp \unskip})%
  \fi
  \null % reset \spacefactor=1000
}
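% Typical usage:
%   @abbr{Comput. J., Computer Journal}
% keeps normal fonts, but the period after "J." does not get
% end-of-sentence spacing.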

% @asis just yields its argument.  Used with @table, for example.
%
\def\asis#1{#1}

% @math outputs its argument in math mode.
%
% One complication: _ usually means subscripts, but it could also mean
% an actual _ character, as in @math{@var{some_variable} + 1}.  So make
% _ active, and distinguish by seeing if the current family is \slfam,
% which is what @var uses.
{
  \catcode`\_ = \active
  \gdef\mathunderscore{%
    \catcode`\_=\active
    \def_{\ifnum\fam=\slfam \_\else\sb\fi}%
  }
}
% Another complication: we want \\ (and @\) to output a math (or tt) \.
% FYI, plain.tex uses \\ as a temporary control sequence (for no
% particular reason), but this is not advertised and we don't care.
%
% The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\.
\def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi}
%
\def\math{%
  \tex
  \mathunderscore
  \let\\ = \mathbackslash
  \mathactive
  % make the texinfo accent commands work in math mode
  \let\"=\ddot
  \let\'=\acute
  \let\==\bar
  \let\^=\hat
  \let\`=\grave
  \let\u=\breve
  \let\v=\check
  \let\~=\tilde
  \let\dotaccent=\dot
  $\finishmath
}
\def\finishmath#1{#1$\endgroup}  % Close the group opened by \tex.
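% For example:
%   @math{@var{a}^2 + @var{b}^2 = @var{c}^2}
% typesets its argument in TeX math mode; see \mathunderscore above for
% the special handling of _ inside @math.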

% Some active characters (such as <) are spaced differently in math.
% We have to reset their definitions in case the @math was an argument
% to a command which sets the catcodes (such as @item or @section).
%
{
  \catcode`^ = \active
  \catcode`< = \active
  \catcode`> = \active
  \catcode`+ = \active
  \catcode`' = \active
  \gdef\mathactive{%
    \let^ = \ptexhat
    \let< = \ptexless
    \let> = \ptexgtr
    \let+ = \ptexplus
    \let' = \ptexquoteright
  }
}

% ctrl is no longer a Texinfo command, but leave this definition for fun.
\def\ctrl #1{{\tt \rawbackslash \hat}#1}

% @inlinefmt{FMTNAME,PROCESSED-TEXT} and @inlineraw{FMTNAME,RAW-TEXT}.
% Ignore unless FMTNAME == tex; then it is like @iftex and @tex,
% except specified as a normal braced arg, so no newlines to worry about.
% 
\def\outfmtnametex{tex}
%
\long\def\inlinefmt#1{\doinlinefmt #1,\finish}
\long\def\doinlinefmt#1,#2,\finish{%
  \def\inlinefmtname{#1}%
  \ifx\inlinefmtname\outfmtnametex \ignorespaces #2\fi
}
% For raw, must switch into @tex before parsing the argument, to avoid
% setting catcodes prematurely.  Doing it this way means that, for
% example, @inlineraw{html, foo{bar} gets a parse error instead of being
% ignored.  But this isn't important because if people want a literal
% *right* brace they would have to use a command anyway, so they may as
% well use a command to get a left brace too.  We could re-use the
% delimiter character idea from \verb, but it seems like overkill.
% 
\long\def\inlineraw{\tex \doinlineraw}
\long\def\doinlineraw#1{\doinlinerawtwo #1,\finish}
\def\doinlinerawtwo#1,#2,\finish{%
  \def\inlinerawname{#1}%
  \ifx\inlinerawname\outfmtnametex \ignorespaces #2\fi
  \endgroup % close group opened by \tex.
}


\message{glyphs,}
% and logos.

% @@ prints an @, as does @atchar{}.
\def\@{\char64 }
\let\atchar=\@

% @{ @} @lbracechar{} @rbracechar{} all generate brace characters.
% Unless we're in typewriter, use \ecfont because the CM text fonts do
% not have braces, and we don't want to switch into math.
\def\mylbrace{{\ifmonospace\else\ecfont\fi \char123}}
\def\myrbrace{{\ifmonospace\else\ecfont\fi \char125}}
\let\{=\mylbrace \let\lbracechar=\{
\let\}=\myrbrace \let\rbracechar=\}
\begingroup
  % Definitions to produce \{ and \} commands for indices,
  % and @{ and @} for the aux/toc files.
  \catcode`\{ = \other \catcode`\} = \other
  \catcode`\[ = 1 \catcode`\] = 2
  \catcode`\! = 0 \catcode`\\ = \other
  !gdef!lbracecmd[\{]%
  !gdef!rbracecmd[\}]%
  !gdef!lbraceatcmd[@{]%
  !gdef!rbraceatcmd[@}]%
!endgroup

% @comma{} to avoid , parsing problems.
\let\comma = ,

% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent
% Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H.
\let\, = \ptexc
\let\dotaccent = \ptexdot
\def\ringaccent#1{{\accent23 #1}}
\let\tieaccent = \ptext
\let\ubaraccent = \ptexb
\let\udotaccent = \d

% Other special characters: @questiondown @exclamdown @ordf @ordm
% Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss.
\def\questiondown{?`}
\def\exclamdown{!`}
\def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}}
\def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}}

% Dotless i and dotless j, used for accents.
\def\imacro{i}
\def\jmacro{j}
\def\dotless#1{%
  \def\temp{#1}%
  \ifx\temp\imacro \ifmmode\imath \else\ptexi \fi
  \else\ifx\temp\jmacro \ifmmode\jmath \else\j \fi
  \else \errmessage{@dotless can be used only with i or j}%
  \fi\fi
}
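% Used inside accent commands, e.g.:
%   @'{@dotless{i}}
% produces an i-acute without the dot colliding with the accent.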

% The \TeX{} logo, as in plain, but resetting the spacing so that a
% period following counts as ending a sentence.  (Idea found in latex.)
%
\edef\TeX{\TeX \spacefactor=1000 }

% @LaTeX{} logo.  Not quite the same results as the definition in
% latex.ltx, since we use a different font for the raised A; it's most
% convenient for us to use an explicitly smaller font, rather than using
% the \scriptstyle font (since we don't reset \scriptstyle and
% \scriptscriptstyle).
%
\def\LaTeX{%
  L\kern-.36em
  {\setbox0=\hbox{T}%
   \vbox to \ht0{\hbox{%
     \ifx\textnominalsize\xwordpt
       % for 10pt running text, \lllsize (8pt) is too small for the A in LaTeX.
       % Revert to plain's \scriptsize, which is 7pt.
       \count255=\the\fam $\fam\count255 \scriptstyle A$%
     \else
       % For 11pt, we can use our lllsize.
       \selectfonts\lllsize A%
     \fi
     }%
     \vss
  }}%
  \kern-.15em
  \TeX
}

% Some math mode symbols.
\def\bullet{$\ptexbullet$}
\def\geq{\ifmmode \ge\else $\ge$\fi}
\def\leq{\ifmmode \le\else $\le$\fi}
\def\minus{\ifmmode -\else $-$\fi}

% @dots{} outputs an ellipsis using the current font.
% We do .5em per period so that it has the same spacing in the cm
% typewriter fonts as three actual period characters; on the other hand,
% in other typewriter fonts three periods are wider than 1.5em.  So do
% whichever is larger.
%
\def\dots{%
  \leavevmode
  \setbox0=\hbox{...}% get width of three periods
  \ifdim\wd0 > 1.5em
    \dimen0 = \wd0
  \else
    \dimen0 = 1.5em
  \fi
  \hbox to \dimen0{%
    \hskip 0pt plus.25fil
    .\hskip 0pt plus1fil
    .\hskip 0pt plus1fil
    .\hskip 0pt plus.5fil
  }%
}

% @enddots{} is an end-of-sentence ellipsis.
%
\def\enddots{%
  \dots
  \spacefactor=\endofsentencespacefactor
}

% @point{}, @result{}, @expansion{}, @print{}, @equiv{}.
%
% Since these characters are used in examples, they should be an even number of
% \tt widths. Each \tt character is 1en, so two makes it 1em.
%
\def\point{$\star$}
\def\arrow{\leavevmode\raise.05ex\hbox to 1em{\hfil$\rightarrow$\hfil}}
\def\result{\leavevmode\raise.05ex\hbox to 1em{\hfil$\Rightarrow$\hfil}}
\def\expansion{\leavevmode\hbox to 1em{\hfil$\mapsto$\hfil}}
\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}}
\def\equiv{\leavevmode\hbox to 1em{\hfil$\ptexequiv$\hfil}}

% The @error{} command.
% Adapted from the TeXbook's \boxit.
%
\newbox\errorbox
%
{\tentt \global\dimen0 = 3em}% Width of the box.
\dimen2 = .55pt % Thickness of rules
% The text. (`r' is open on the right, `e' somewhat less so on the left.)
\setbox0 = \hbox{\kern-.75pt \reducedsf \putworderror\kern-1.5pt}
%
\setbox\errorbox=\hbox to \dimen0{\hfil
   \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right.
   \advance\hsize by -2\dimen2 % Rules.
   \vbox{%
      \hrule height\dimen2
      \hbox{\vrule width\dimen2 \kern3pt          % Space to left of text.
         \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below.
         \kern3pt\vrule width\dimen2}% Space to right.
      \hrule height\dimen2}
    \hfil}
%
\def\error{\leavevmode\lower.7ex\copy\errorbox}

% @pounds{} is a sterling sign, which Knuth put in the CM italic font.
%
\def\pounds{{\it\$}}

% @euro{} comes from a separate font, depending on the current style.
% We use the free feym* fonts from the eurosym package by Henrik
% Theiling, which support regular, slanted, bold and bold slanted (and
% "outlined" (blackboard board, sort of) versions, which we don't need).
% It is available from http://www.ctan.org/tex-archive/fonts/eurosym.
%
% Although only regular is the truly official Euro symbol, we ignore
% that.  The Euro is designed to be slightly taller than the regular
% font height.
%
% feymr - regular
% feymo - slanted
% feybr - bold
% feybo - bold slanted
%
% There is no good (free) typewriter version, to my knowledge.
% A feymr10 euro is ~7.3pt wide, while a normal cmtt10 char is ~5.25pt wide.
% Hmm.
%
% Also doesn't work in math.  Do we need to do math with euro symbols?
% Hope not.
%
%
\def\euro{{\eurofont e}}
\def\eurofont{%
  % We set the font at each command, rather than predefining it in
  % \textfonts and the other font-switching commands, so that
  % installations which never need the symbol don't have to have the
  % font installed.
  %
  % There is only one designed size (nominal 10pt), so we always scale
  % that to the current nominal size.
  %
  % By the way, simply using "at 1em" works for cmr10 and the like, but
  % does not work for cmbx10 and other extended/shrunken fonts.
  %
  \def\eurosize{\csname\curfontsize nominalsize\endcsname}%
  %
  \ifx\curfontstyle\bfstylename
    % bold:
    \font\thiseurofont = \ifusingit{feybo10}{feybr10} at \eurosize
  \else
    % regular:
    \font\thiseurofont = \ifusingit{feymo10}{feymr10} at \eurosize
  \fi
  \thiseurofont
}

% Glyphs from the EC fonts.  We don't use \let for the aliases, because
% sometimes we redefine the original macro, and the alias should reflect
% the redefinition.
%
% Use LaTeX names for the Icelandic letters.
\def\DH{{\ecfont \char"D0}} % Eth
\def\dh{{\ecfont \char"F0}} % eth
\def\TH{{\ecfont \char"DE}} % Thorn
\def\th{{\ecfont \char"FE}} % thorn
%
\def\guillemetleft{{\ecfont \char"13}}
\def\guillemotleft{\guillemetleft}
\def\guillemetright{{\ecfont \char"14}}
\def\guillemotright{\guillemetright}
\def\guilsinglleft{{\ecfont \char"0E}}
\def\guilsinglright{{\ecfont \char"0F}}
\def\quotedblbase{{\ecfont \char"12}}
\def\quotesinglbase{{\ecfont \char"0D}}
%
% This positioning is not perfect (see the ogonek LaTeX package), but
% we have the precomposed glyphs for the most common cases.  We put the
% tests to use those glyphs in the single \ogonek macro so we have fewer
% dummy definitions to worry about for index entries, etc.
%
% ogonek is also used with other letters in Lithuanian (IOU), but using
% the precomposed glyphs for those is not so easy since they aren't in
% the same EC font.
\def\ogonek#1{{%
  \def\temp{#1}%
  \ifx\temp\macrocharA\Aogonek
  \else\ifx\temp\macrochara\aogonek
  \else\ifx\temp\macrocharE\Eogonek
  \else\ifx\temp\macrochare\eogonek
  \else
    \ecfont \setbox0=\hbox{#1}%
    \ifdim\ht0=1ex\accent"0C #1%
    \else\ooalign{\unhbox0\crcr\hidewidth\char"0C \hidewidth}%
    \fi
  \fi\fi\fi\fi
  }%
}
\def\Aogonek{{\ecfont \char"81}}\def\macrocharA{A}
\def\aogonek{{\ecfont \char"A1}}\def\macrochara{a}
\def\Eogonek{{\ecfont \char"86}}\def\macrocharE{E}
\def\eogonek{{\ecfont \char"A6}}\def\macrochare{e}
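% Typical usage, e.g. for Polish:
%   @ogonek{a}   as in k@ogonek{a}t (angle)
% which uses the precomposed EC glyph for a-ogonek.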
%
% Use the ec* fonts (cm-super in outline format) for non-CM glyphs.
\def\ecfont{%
  % We can't distinguish serif/sans and italic/slanted, but this
  % is used for crude hacks anyway (like adding French and German
  % quotes to documents typeset with CM, where we lose kerning), so
  % hopefully nobody will notice/care.
  \edef\ecsize{\csname\curfontsize ecsize\endcsname}%
  \edef\nominalsize{\csname\curfontsize nominalsize\endcsname}%
  \ifmonospace
    % typewriter:
    \font\thisecfont = ectt\ecsize \space at \nominalsize
  \else
    \ifx\curfontstyle\bfstylename
      % bold:
      \font\thisecfont = ecb\ifusingit{i}{x}\ecsize \space at \nominalsize
    \else
      % regular:
      \font\thisecfont = ec\ifusingit{ti}{rm}\ecsize \space at \nominalsize
    \fi
  \fi
  \thisecfont
}

% @registeredsymbol - R in a circle.  The font for the R should really
% be smaller yet, but lllsize is the best we can do for now.
% Adapted from the plain.tex definition of \copyright.
%
\def\registeredsymbol{%
  $^{{\ooalign{\hfil\raise.07ex\hbox{\selectfonts\lllsize R}%
               \hfil\crcr\Orb}}%
    }$%
}

% @textdegree - the normal degrees sign.
%
\def\textdegree{$^\circ$}

% Laurent Siebenmann reports \Orb undefined with:
%  Textures 1.7.7 (preloaded format=plain 93.10.14)  (68K)  16 APR 2004 02:38
% so we'll define it if necessary.
%
\ifx\Orb\thisisundefined
\def\Orb{\mathhexbox20D}
\fi

% Quotes.
\chardef\quotedblleft="5C
\chardef\quotedblright=`\"
\chardef\quoteleft=`\`
\chardef\quoteright=`\'


\message{page headings,}

\newskip\titlepagetopglue \titlepagetopglue = 1.5in
\newskip\titlepagebottomglue \titlepagebottomglue = 2pc

% First the title page.  Must do @settitle before @titlepage.
\newif\ifseenauthor
\newif\iffinishedtitlepage

% Do an implicit @contents or @shortcontents after @end titlepage if the
% user says @setcontentsaftertitlepage or @setshortcontentsaftertitlepage.
%
\newif\ifsetcontentsaftertitlepage
 \let\setcontentsaftertitlepage = \setcontentsaftertitlepagetrue
\newif\ifsetshortcontentsaftertitlepage
 \let\setshortcontentsaftertitlepage = \setshortcontentsaftertitlepagetrue

\parseargdef\shorttitlepage{%
  \begingroup \hbox{}\vskip 1.5in \chaprm \centerline{#1}%
  \endgroup\page\hbox{}\page}

\envdef\titlepage{%
  % Open one extra group, as we want to close it in the middle of \Etitlepage.
  \begingroup
    \parindent=0pt \textfonts
    % Leave some space at the very top of the page.
    \vglue\titlepagetopglue
    % No rule at page bottom unless we print one at the top with @title.
    \finishedtitlepagetrue
    %
    % Most title ``pages'' are actually two pages long, with space
    % at the top of the second.  We don't want the ragged left on the second.
    \let\oldpage = \page
    \def\page{%
      \iffinishedtitlepage\else
	 \finishtitlepage
      \fi
      \let\page = \oldpage
      \page
      \null
    }%
}

\def\Etitlepage{%
    \iffinishedtitlepage\else
	\finishtitlepage
    \fi
    % It is important to do the page break before ending the group,
    % because the headline and footline are only empty inside the group.
    % If we use the new definition of \page, we always get a blank page
    % after the title page, which we certainly don't want.
    \oldpage
  \endgroup
  %
  % Need this before the \...aftertitlepage checks so that if they are
  % in effect the toc pages will come out with page numbers.
  \HEADINGSon
  %
  % If they want short, they certainly want long too.
  \ifsetshortcontentsaftertitlepage
    \shortcontents
    \contents
    \global\let\shortcontents = \relax
    \global\let\contents = \relax
  \fi
  %
  \ifsetcontentsaftertitlepage
    \contents
    \global\let\contents = \relax
    \global\let\shortcontents = \relax
  \fi
}

\def\finishtitlepage{%
  \vskip4pt \hrule height 2pt width \hsize
  \vskip\titlepagebottomglue
  \finishedtitlepagetrue
}

% Settings used for typesetting titles: no hyphenation, no indentation,
% don't worry much about spacing, ragged right.  This should be used
% inside a \vbox, and fonts need to be set appropriately first.  Because
% it is always used for titles, nothing else, we call \rmisbold.  \par
% should be specified before the end of the \vbox, since a vbox is a group.
% 
\def\raggedtitlesettings{%
  \rmisbold
  \hyphenpenalty=10000
  \parindent=0pt
  \tolerance=5000
  \ptexraggedright
}

% Macros to be used within @titlepage:

\let\subtitlerm=\tenrm
\def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines}

\parseargdef\title{%
  \checkenv\titlepage
  \vbox{\titlefonts \raggedtitlesettings #1\par}%
  % print a rule at the page bottom also.
  \finishedtitlepagefalse
  \vskip4pt \hrule height 4pt width \hsize \vskip4pt
}

\parseargdef\subtitle{%
  \checkenv\titlepage
  {\subtitlefont \rightline{#1}}%
}

% @author should come last, but may come many times.
% It can also be used inside @quotation.
%
\parseargdef\author{%
  \def\temp{\quotation}%
  \ifx\thisenv\temp
    \def\quotationauthor{#1}% printed in \Equotation.
  \else
    \checkenv\titlepage
    \ifseenauthor\else \vskip 0pt plus 1filll \seenauthortrue \fi
    {\secfonts\rmisbold \leftline{#1}}%
  \fi
}


% Set up page headings and footings.

\let\thispage=\folio

\newtoks\evenheadline    % headline on even pages
\newtoks\oddheadline     % headline on odd pages
\newtoks\evenfootline    % footline on even pages
\newtoks\oddfootline     % footline on odd pages

% Now make TeX use those variables
\headline={{\textfonts\rm \ifodd\pageno \the\oddheadline
                            \else \the\evenheadline \fi}}
\footline={{\textfonts\rm \ifodd\pageno \the\oddfootline
                            \else \the\evenfootline \fi}\HEADINGShook}
\let\HEADINGShook=\relax

% Commands to set those variables.
% For example, this is what  @headings on  does
% @evenheading @thistitle|@thispage|@thischapter
% @oddheading @thischapter|@thispage|@thistitle
% @evenfooting @thisfile||
% @oddfooting ||@thisfile


\def\evenheading{\parsearg\evenheadingxxx}
\def\evenheadingxxx #1{\evenheadingyyy #1\|\|\|\|\finish}
\def\evenheadingyyy #1\|#2\|#3\|#4\finish{%
\global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}

\def\oddheading{\parsearg\oddheadingxxx}
\def\oddheadingxxx #1{\oddheadingyyy #1\|\|\|\|\finish}
\def\oddheadingyyy #1\|#2\|#3\|#4\finish{%
\global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}

\parseargdef\everyheading{\oddheadingxxx{#1}\evenheadingxxx{#1}}%

\def\evenfooting{\parsearg\evenfootingxxx}
\def\evenfootingxxx #1{\evenfootingyyy #1\|\|\|\|\finish}
\def\evenfootingyyy #1\|#2\|#3\|#4\finish{%
\global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}}

\def\oddfooting{\parsearg\oddfootingxxx}
\def\oddfootingxxx #1{\oddfootingyyy #1\|\|\|\|\finish}
\def\oddfootingyyy #1\|#2\|#3\|#4\finish{%
  \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}%
  %
  % Leave some space for the footline.  Hopefully ok to assume
  % @evenfooting will not be used by itself.
  \global\advance\pageheight by -12pt
  \global\advance\vsize by -12pt
}

\parseargdef\everyfooting{\oddfootingxxx{#1}\evenfootingxxx{#1}}

% @evenheadingmarks top     \thischapter <- chapter at the top of a page
% @evenheadingmarks bottom  \thischapter <- chapter at the bottom of a page
%
% The same set of arguments for:
%
% @oddheadingmarks
% @evenfootingmarks
% @oddfootingmarks
% @everyheadingmarks
% @everyfootingmarks

\def\evenheadingmarks{\headingmarks{even}{heading}}
\def\oddheadingmarks{\headingmarks{odd}{heading}}
\def\evenfootingmarks{\headingmarks{even}{footing}}
\def\oddfootingmarks{\headingmarks{odd}{footing}}
\def\everyheadingmarks#1 {\headingmarks{even}{heading}{#1}
                          \headingmarks{odd}{heading}{#1} }
\def\everyfootingmarks#1 {\headingmarks{even}{footing}{#1}
                          \headingmarks{odd}{footing}{#1} }
% #1 = even/odd, #2 = heading/footing, #3 = top/bottom.
\def\headingmarks#1#2#3 {%
  \expandafter\let\expandafter\temp \csname get#3headingmarks\endcsname
  \global\expandafter\let\csname get#1#2marks\endcsname \temp
}

\everyheadingmarks bottom
\everyfootingmarks bottom

% @headings double      turns headings on for double-sided printing.
% @headings single      turns headings on for single-sided printing.
% @headings off         turns them off.
% @headings on          same as @headings double, retained for compatibility.
% @headings after       turns on double-sided headings after this page.
% @headings doubleafter turns on double-sided headings after this page.
% @headings singleafter turns on single-sided headings after this page.
% By default, they are off at the start of a document,
% and turned `on' after @end titlepage.

\def\headings #1 {\csname HEADINGS#1\endcsname}

\def\headingsoff{% non-global headings elimination
  \evenheadline={\hfil}\evenfootline={\hfil}%
   \oddheadline={\hfil}\oddfootline={\hfil}%
}

\def\HEADINGSoff{{\globaldefs=1 \headingsoff}} % global setting
\HEADINGSoff  % it's the default

% When we turn headings on, set the page number to 1.
% For double-sided printing, put current file name in lower left corner,
% chapter name on inside top of right hand pages, document
% title on inside top of left hand pages, and page numbers on outside top
% edge of all pages.
\def\HEADINGSdouble{%
\global\pageno=1
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\folio\hfil\thistitle}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chapoddpage
}
\let\contentsalignmacro = \chappager

% For single-sided printing, chapter title goes across top left of page,
% page number on top right.
\def\HEADINGSsingle{%
\global\pageno=1
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\thischapter\hfil\folio}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chappager
}
\def\HEADINGSon{\HEADINGSdouble}

\def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex}
\let\HEADINGSdoubleafter=\HEADINGSafter
\def\HEADINGSdoublex{%
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\folio\hfil\thistitle}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chapoddpage
}

\def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex}
\def\HEADINGSsinglex{%
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\thischapter\hfil\folio}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chappager
}

% Subroutines used in generating headings
% This produces Day Month Year style of output.
% Only define if not already defined, in case a txi-??.tex file has set
% up a different format (e.g., txi-cs.tex does this).
\ifx\today\thisisundefined
\def\today{%
  \number\day\space
  \ifcase\month
  \or\putwordMJan\or\putwordMFeb\or\putwordMMar\or\putwordMApr
  \or\putwordMMay\or\putwordMJun\or\putwordMJul\or\putwordMAug
  \or\putwordMSep\or\putwordMOct\or\putwordMNov\or\putwordMDec
  \fi
  \space\number\year}
\fi

% @settitle line...  specifies the title of the document, for headings.
% It generates no output of its own.
\def\thistitle{\putwordNoTitle}
\def\settitle{\parsearg{\gdef\thistitle}}
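
% For example, a document header might contain (with a sample title):
%   @settitle GNU Sample Manual
% after which \thistitle expands to `GNU Sample Manual' in headings.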


\message{tables,}
% Tables -- @table, @ftable, @vtable, @item(x).

% default indentation of table text
\newdimen\tableindent \tableindent=.8in
% default indentation of @itemize and @enumerate text
\newdimen\itemindent  \itemindent=.3in
% margin between end of table item and start of table text.
\newdimen\itemmargin  \itemmargin=.1in

% used internally for \itemindent minus \itemmargin
\newdimen\itemmax

% Note @table, @ftable, and @vtable define @item, @itemx, etc., with
% these defs.
% They also define \itemindex
% to index the item name in whatever manner is desired (perhaps none).

\newif\ifitemxneedsnegativevskip

\def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi}

\def\internalBitem{\smallbreak \parsearg\itemzzz}
\def\internalBitemx{\itemxpar \parsearg\itemzzz}

\def\itemzzz #1{\begingroup %
  \advance\hsize by -\rightskip
  \advance\hsize by -\tableindent
  \setbox0=\hbox{\itemindicate{#1}}%
  \itemindex{#1}%
  \nobreak % This prevents a break before @itemx.
  %
  % If the item text does not fit in the space we have, put it on a line
  % by itself, and do not allow a page break either before or after that
  % line.  We do not start a paragraph here because then if the next
  % command is, e.g., @kindex, the whatsit would get put into the
  % horizontal list on a line by itself, resulting in extra blank space.
  \ifdim \wd0>\itemmax
    %
    % Make this a paragraph so we get the \parskip glue and wrapping,
    % but leave it ragged-right.
    \begingroup
      \advance\leftskip by-\tableindent
      \advance\hsize by\tableindent
      \advance\rightskip by0pt plus1fil\relax
      \leavevmode\unhbox0\par
    \endgroup
    %
    % We're going to be starting a paragraph, but we don't want the
    % \parskip glue -- logically it's part of the @item we just started.
    \nobreak \vskip-\parskip
    %
    % Stop a page break at the \parskip glue coming up.  However, if
    % what follows is an environment such as @example, there will be no
    % \parskip glue; then the negative vskip we just inserted would
    % cause the example and the item to crash together.  So we use this
    % bizarre value of 10001 as a signal to \aboveenvbreak to insert
    % \parskip glue after all.  Section titles are handled this way also.
    %
    \penalty 10001
    \endgroup
    \itemxneedsnegativevskipfalse
  \else
    % The item text fits into the space.  Start a paragraph, so that the
    % following text (if any) will end up on the same line.
    \noindent
    % Do this with kerns and \unhbox so that if there is a footnote in
    % the item text, it can migrate to the main vertical list and
    % eventually be printed.
    \nobreak\kern-\tableindent
    \dimen0 = \itemmax  \advance\dimen0 by \itemmargin \advance\dimen0 by -\wd0
    \unhbox0
    \nobreak\kern\dimen0
    \endgroup
    \itemxneedsnegativevskiptrue
  \fi
}

\def\item{\errmessage{@item while not in a list environment}}
\def\itemx{\errmessage{@itemx while not in a list environment}}

% @table, @ftable, @vtable.
\envdef\table{%
  \let\itemindex\gobble
  \tablecheck{table}%
}
\envdef\ftable{%
  \def\itemindex ##1{\doind {fn}{\code{##1}}}%
  \tablecheck{ftable}%
}
\envdef\vtable{%
  \def\itemindex ##1{\doind {vr}{\code{##1}}}%
  \tablecheck{vtable}%
}
\def\tablecheck#1{%
  \ifnum \the\catcode`\^^M=\active
    \endgroup
    \errmessage{This command won't work in this context; perhaps the problem is
      that we are \inenvironment\thisenv}%
    \def\next{\doignore{#1}}%
  \else
    \let\next\tablex
  \fi
  \next
}
\def\tablex#1{%
  \def\itemindicate{#1}%
  \parsearg\tabley
}
\def\tabley#1{%
  {%
    \makevalueexpandable
    \edef\temp{\noexpand\tablez #1\space\space\space}%
    \expandafter
  }\temp \endtablez
}
\def\tablez #1 #2 #3 #4\endtablez{%
  \aboveenvbreak
  \ifnum 0#1>0 \advance \leftskip by #1\mil \fi
  \ifnum 0#2>0 \tableindent=#2\mil \fi
  \ifnum 0#3>0 \advance \rightskip by #3\mil \fi
  \itemmax=\tableindent
  \advance \itemmax by -\itemmargin
  \advance \leftskip by \tableindent
  \exdentamount=\tableindent
  \parindent = 0pt
  \parskip = \smallskipamount
  \ifdim \parskip=0pt \parskip=2pt \fi
  \let\item = \internalBitem
  \let\itemx = \internalBitemx
}
\def\Etable{\endgraf\afterenvbreak}
\let\Eftable\Etable
\let\Evtable\Etable
\let\Eitemize\Etable
\let\Eenumerate\Etable
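
% For example, an @ftable indexes each item name in the function index
% (here `fopen' is just a sample entry; it reaches the fn index via
% \itemindex):
%   @ftable @code
%   @item fopen
%   Opens a file.
%   @end ftable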

% This is the counter used by @enumerate, which is really @itemize.

\newcount \itemno

\envdef\itemize{\parsearg\doitemize}

\def\doitemize#1{%
  \aboveenvbreak
  \itemmax=\itemindent
  \advance\itemmax by -\itemmargin
  \advance\leftskip by \itemindent
  \exdentamount=\itemindent
  \parindent=0pt
  \parskip=\smallskipamount
  \ifdim\parskip=0pt \parskip=2pt \fi
  %
  % Try typesetting the item mark so that if the document erroneously says
  % something like @itemize @samp (intending @table), there's an error
  % right away at the @itemize.  It's not the best error message in the
  % world, but it's better than leaving it to the @item.  This means if
  % the user wants an empty mark, they have to say @w{} not just @w.
  \def\itemcontents{#1}%
  \setbox0 = \hbox{\itemcontents}%
  %
  % @itemize with no arg is equivalent to @itemize @bullet.
  \ifx\itemcontents\empty\def\itemcontents{\bullet}\fi
  %
  \let\item=\itemizeitem
}

% Definition of @item while inside @itemize and @enumerate.
%
\def\itemizeitem{%
  \advance\itemno by 1  % for enumerations
  {\let\par=\endgraf \smallbreak}% reasonable place to break
  {%
   % If the document has an @itemize directly after a section title, a
   % \nobreak will be last on the list, and \sectionheading will have
   % done a \vskip-\parskip.  In that case, we don't want to zero
   % parskip, or the item text will crash with the heading.  On the
   % other hand, when there is normal text preceding the item (as there
   % usually is), we do want to zero parskip, or there would be too much
   % space.  In that case, we won't have a \nobreak before.  At least
   % that's the theory.
   \ifnum\lastpenalty<10000 \parskip=0in \fi
   \noindent
   \hbox to 0pt{\hss \itemcontents \kern\itemmargin}%
   %
   \vadjust{\penalty 1200}}% not good to break after first line of item.
  \flushcr
}

% \splitoff TOKENS\endmark defines \first to be the first token in
% TOKENS, and \rest to be the remainder.
%
\def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}%

% Allow an optional argument of an uppercase letter, lowercase letter,
% or number, to specify the first label in the enumerated list.  No
% argument is the same as `1'.
%
\envparseargdef\enumerate{\enumeratey #1  \endenumeratey}
\def\enumeratey #1 #2\endenumeratey{%
  % If we were given no argument, pretend we were given `1'.
  \def\thearg{#1}%
  \ifx\thearg\empty \def\thearg{1}\fi
  %
  % Detect if the argument is a single token.  If so, it might be a
  % letter.  Otherwise, the only valid thing it can be is a number.
  % (We will always have one token, because of the test we just made.
  % This is a good thing, since \splitoff doesn't work given nothing at
  % all -- the first parameter is undelimited.)
  \expandafter\splitoff\thearg\endmark
  \ifx\rest\empty
    % Only one token in the argument.  It could still be anything.
    % A ``lowercase letter'' is one whose \lccode is nonzero.
    % An ``uppercase letter'' is one whose \lccode is both nonzero, and
    %   not equal to itself.
    % Otherwise, we assume it's a number.
    %
    % We need the \relax at the end of the \ifnum lines to stop TeX from
    % continuing to look for a <number>.
    %
    \ifnum\lccode\expandafter`\thearg=0\relax
      \numericenumerate % a number (we hope)
    \else
      % It's a letter.
      \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax
        \lowercaseenumerate % lowercase letter
      \else
        \uppercaseenumerate % uppercase letter
      \fi
    \fi
  \else
    % Multiple tokens in the argument.  We hope it's a number.
    \numericenumerate
  \fi
}

% An @enumerate whose labels are integers.  The starting integer is
% given in \thearg.
%
\def\numericenumerate{%
  \itemno = \thearg
  \startenumeration{\the\itemno}%
}

% The starting (lowercase) letter is in \thearg.
\def\lowercaseenumerate{%
  \itemno = \expandafter`\thearg
  \startenumeration{%
    % Be sure we're not beyond the end of the alphabet.
    \ifnum\itemno=0
      \errmessage{No more lowercase letters in @enumerate; get a bigger
                  alphabet}%
    \fi
    \char\lccode\itemno
  }%
}

% The starting (uppercase) letter is in \thearg.
\def\uppercaseenumerate{%
  \itemno = \expandafter`\thearg
  \startenumeration{%
    % Be sure we're not beyond the end of the alphabet.
    \ifnum\itemno=0
      \errmessage{No more uppercase letters in @enumerate; get a bigger
                  alphabet}
    \fi
    \char\uccode\itemno
  }%
}

% Call \doitemize, adding a period to the argument.  Also subtract one
% from the initial value in \itemno, since @item increments \itemno.
%
\def\startenumeration#1{%
  \advance\itemno by -1
  \doitemize{#1.}\flushcr
}

% @alphaenumerate and @capsenumerate are abbreviations for giving an arg
% to @enumerate.
%
\def\alphaenumerate{\enumerate{a}}
\def\capsenumerate{\enumerate{A}}
\def\Ealphaenumerate{\Eenumerate}
\def\Ecapsenumerate{\Eenumerate}
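
% For example:
%   @enumerate 10
%   @item starts the list at label `10.'
%   @end enumerate
% and `@enumerate a' or `@enumerate A' produce lettered labels instead.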


% @multitable macros
% Amy Hendrickson, 8/18/94, 3/6/96
%
% @multitable ... @end multitable will make as many columns as desired.
% Contents of each column will wrap at width given in preamble.  Width
% can be specified either with sample text given in a template line,
% or in percent of \hsize, the current width of text on page.

% Table can continue over pages but will only break between lines.

% To make preamble:
%
% Either define widths of columns in terms of percent of \hsize:
%   @multitable @columnfractions .25 .3 .45
%   @item ...
%
%   Numbers following @columnfractions are the percent of the total
%   current hsize to be used for each column. You may use as many
%   columns as desired.


% Or use a template:
%   @multitable {Column 1 template} {Column 2 template} {Column 3 template}
%   @item ...
%   using the widest term desired in each column.

% Each new table line starts with @item, each subsequent new column
% starts with @tab.  Empty columns may be produced by supplying consecutive
% @tab's with nothing between them, one per empty column needed;
% e.g., @tab@tab@tab will produce two empty columns.

% @item, @tab do not need to be on their own lines, but it will not hurt
% if they are.

% Sample multitable:

%   @multitable {Column 1 template} {Column 2 template} {Column 3 template}
%   @item first col stuff @tab second col stuff @tab third col
%   @item
%   first col stuff
%   @tab
%   second col stuff
%   @tab
%   third col
%   @item first col stuff @tab second col stuff
%   @tab Many paragraphs of text may be used in any column.
%
%         They will wrap at the width determined by the template.
%   @item@tab@tab This will be in third column.
%   @end multitable

% Default dimensions may be reset by the user.
% @multitableparskip is vertical space between paragraphs in the table.
% @multitableparindent is paragraph indent in the table.
% @multitablecolspace is horizontal space to be left between columns.
% @multitablelinespace is space to leave between table items, baseline
%   to baseline.  0pt means it depends on the current normal line spacing.
%
\newskip\multitableparskip
\newskip\multitableparindent
\newdimen\multitablecolspace
\newskip\multitablelinespace
\multitableparskip=0pt
\multitableparindent=6pt
\multitablecolspace=12pt
\multitablelinespace=0pt

% Macros used to set up halign preamble:
%
\let\endsetuptable\relax
\def\xendsetuptable{\endsetuptable}
\let\columnfractions\relax
\def\xcolumnfractions{\columnfractions}
\newif\ifsetpercent

% #1 is the @columnfraction, usually a decimal number like .5, but might
% be just 1.  We just use it, whatever it is.
%
\def\pickupwholefraction#1 {%
  \global\advance\colcount by 1
  \expandafter\xdef\csname col\the\colcount\endcsname{#1\hsize}%
  \setuptable
}

\newcount\colcount
\def\setuptable#1{%
  \def\firstarg{#1}%
  \ifx\firstarg\xendsetuptable
    \let\go = \relax
  \else
    \ifx\firstarg\xcolumnfractions
      \global\setpercenttrue
    \else
      \ifsetpercent
         \let\go\pickupwholefraction
      \else
         \global\advance\colcount by 1
         \setbox0=\hbox{#1\unskip\space}% Add a normal word space as a
                   % separator; typically that is always in the input, anyway.
         \expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}%
      \fi
    \fi
    \ifx\go\pickupwholefraction
      % Put the argument back for the \pickupwholefraction call, so
      % we'll always have a period there to be parsed.
      \def\go{\pickupwholefraction#1}%
    \else
      \let\go = \setuptable
    \fi%
  \fi
  \go
}

% multitable-only commands.
%
% @headitem starts a heading row, which we typeset in bold.
% Assignments have to be global since we are inside the implicit group
% of an alignment entry.  \everycr resets \everytab so we don't have to
% undo it ourselves.
\def\headitemfont{\b}% for people to use in the template row; not changeable
\def\headitem{%
  \checkenv\multitable
  \crcr
  \global\everytab={\bf}% can't use \headitemfont since the parsing differs
  \the\everytab % for the first item
}%
%
% A \tab used to include \hskip1sp.  But then the space in a template
% line is not enough.  That is bad.  So let's go back to just `&' until
% we again encounter the problem the 1sp was intended to solve.
%					--karl, nathan@acm.org, 20apr99.
\def\tab{\checkenv\multitable &\the\everytab}%

% @multitable ... @end multitable definitions:
%
\newtoks\everytab  % insert after every tab.
%
\envdef\multitable{%
  \vskip\parskip
  \startsavinginserts
  %
  % @item within a multitable starts a normal row.
  % We use \def instead of \let so that if one of the multitable entries
  % contains an @itemize, we don't choke on the \item (seen as \crcr aka
  % \endtemplate) expanding \doitemize.
  \def\item{\crcr}%
  %
  \tolerance=9500
  \hbadness=9500
  \setmultitablespacing
  \parskip=\multitableparskip
  \parindent=\multitableparindent
  \overfullrule=0pt
  \global\colcount=0
  %
  \everycr = {%
    \noalign{%
      \global\everytab={}%
      \global\colcount=0 % Reset the column counter.
      % Check for saved footnotes, etc.
      \checkinserts
      % Keeps underfull box messages off when table breaks over pages.
      %\filbreak
	% Maybe so, but it also creates really weird page breaks when the
	% table breaks over pages. Wouldn't \vfil be better?  Wait until the
	% problem manifests itself, so it can be fixed for real --karl.
    }%
  }%
  %
  \parsearg\domultitable
}
\def\domultitable#1{%
  % To parse everything between @multitable and @item:
  \setuptable#1 \endsetuptable
  %
  % This preamble sets up a generic column definition, which will
  % be used as many times as user calls for columns.
  % \vtop will set a single line and will also let text wrap and
  % continue for many paragraphs if desired.
  \halign\bgroup &%
    \global\advance\colcount by 1
    \multistrut
    \vtop{%
      % Use the current \colcount to find the correct column width:
      \hsize=\expandafter\csname col\the\colcount\endcsname
      %
      % In order to keep entries from bumping into each other
      % we will add a \leftskip of \multitablecolspace to all columns after
      % the first one.
      %
      % If a template has been used, we will add \multitablecolspace
      % to the width of each template entry.
      %
      % If the user has set preamble in terms of percent of \hsize we will
      % use that dimension as the width of the column, and the \leftskip
      % will keep entries from bumping into each other.  Table will start at
      % left margin and final column will justify at right margin.
      %
      % Make sure we don't inherit \rightskip from the outer environment.
      \rightskip=0pt
      \ifnum\colcount=1
	% The first column will be indented with the surrounding text.
	\advance\hsize by\leftskip
      \else
	\ifsetpercent \else
	  % If user has not set preamble in terms of percent of \hsize
	  % we will advance \hsize by \multitablecolspace.
	  \advance\hsize by \multitablecolspace
	\fi
       % In either case we will make \leftskip=\multitablecolspace:
      \leftskip=\multitablecolspace
      \fi
      % Ignoring space at the beginning and end avoids an occasional spurious
      % blank line, when TeX decides to break the line at the space before the
      % box from the multistrut, so the strut ends up on a line by itself.
      % For example:
      % @multitable @columnfractions .11 .89
      % @item @code{#}
      % @tab Legal holiday which is valid in major parts of the whole country.
      % Is automatically provided with highlighting sequences respectively
      % marking characters.
      \noindent\ignorespaces##\unskip\multistrut
    }\cr
}
\def\Emultitable{%
  \crcr
  \egroup % end the \halign
  \global\setpercentfalse
}

\def\setmultitablespacing{%
  \def\multistrut{\strut}% just use the standard line spacing
  %
  % Compute \multitablelinespace (if not defined by user) for use in
  % \multitableparskip calculation.  We used to define \multistrut based on
  % this, but (ironically) that caused the spacing to be off.
  % See bug-texinfo report from Werner Lemberg, 31 Oct 2004 12:52:20 +0100.
\ifdim\multitablelinespace=0pt
\setbox0=\vbox{X}\global\multitablelinespace=\the\baselineskip
\global\advance\multitablelinespace by-\ht0
\fi
% Test to see if parskip is larger than space between lines of
% table. If not, do nothing.
%        If so, set to same dimension as multitablelinespace.
\ifdim\multitableparskip>\multitablelinespace
\global\multitableparskip=\multitablelinespace
\global\advance\multitableparskip-7pt % to keep parskip somewhat smaller
                                      % than skip between lines in the table.
\fi%
\ifdim\multitableparskip=0pt
\global\multitableparskip=\multitablelinespace
\global\advance\multitableparskip-7pt % to keep parskip somewhat smaller
                                      % than skip between lines in the table.
\fi}


\message{conditionals,}

% @iftex, @ifnotdocbook, @ifnothtml, @ifnotinfo, @ifnotplaintext,
% @ifnotxml always succeed.  They currently do nothing; we don't
% attempt to check whether the conditionals are properly nested.  But we
% have to remember that they are conditionals, so that @end doesn't
% attempt to close an environment group.
%
\def\makecond#1{%
  \expandafter\let\csname #1\endcsname = \relax
  \expandafter\let\csname iscond.#1\endcsname = 1
}
\makecond{iftex}
\makecond{ifnotdocbook}
\makecond{ifnothtml}
\makecond{ifnotinfo}
\makecond{ifnotplaintext}
\makecond{ifnotxml}

% Ignore @ignore, @ifhtml, @ifinfo, and the like.
%
\def\direntry{\doignore{direntry}}
\def\documentdescription{\doignore{documentdescription}}
\def\docbook{\doignore{docbook}}
\def\html{\doignore{html}}
\def\ifdocbook{\doignore{ifdocbook}}
\def\ifhtml{\doignore{ifhtml}}
\def\ifinfo{\doignore{ifinfo}}
\def\ifnottex{\doignore{ifnottex}}
\def\ifplaintext{\doignore{ifplaintext}}
\def\ifxml{\doignore{ifxml}}
\def\ignore{\doignore{ignore}}
\def\menu{\doignore{menu}}
\def\xml{\doignore{xml}}
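
% For example, TeX skips everything between these lines:
%   @ignore
%   This draft paragraph produces no output in any format.
%   @end ignore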

% Ignore text until a line `@end #1', keeping track of nested conditionals.
%
% A count to remember the depth of nesting.
\newcount\doignorecount

\def\doignore#1{\begingroup
  % Scan in ``verbatim'' mode:
  \obeylines
  \catcode`\@ = \other
  \catcode`\{ = \other
  \catcode`\} = \other
  %
  % Make sure that spaces turn into tokens that match what \doignoretext wants.
  \spaceisspace
  %
  % Count number of #1's that we've seen.
  \doignorecount = 0
  %
  % Swallow text until we reach the matching `@end #1'.
  \dodoignore{#1}%
}

{ \catcode`_=11 % We want to use \_STOP_ which cannot appear in texinfo source.
  \obeylines %
  %
  \gdef\dodoignore#1{%
    % #1 contains the command name as a string, e.g., `ifinfo'.
    %
    % Define a command to find the next `@end #1'.
    \long\def\doignoretext##1^^M@end #1{%
      \doignoretextyyy##1^^M@#1\_STOP_}%
    %
    % And this command to find another #1 command, at the beginning of a
    % line.  (Otherwise, we would consider a line `@c @ifset', for
    % example, to count as an @ifset for nesting.)
    \long\def\doignoretextyyy##1^^M@#1##2\_STOP_{\doignoreyyy{##2}\_STOP_}%
    %
    % And now expand that command.
    \doignoretext ^^M%
  }%
}

\def\doignoreyyy#1{%
  \def\temp{#1}%
  \ifx\temp\empty			% Nothing found.
    \let\next\doignoretextzzz
  \else					% Found a nested condition, ...
    \advance\doignorecount by 1
    \let\next\doignoretextyyy		% ..., look for another.
    % If we're here, #1 ends with ^^M\ifinfo (for example).
  \fi
  \next #1% the token \_STOP_ is present just after this macro.
}

% We have to swallow the remaining "\_STOP_".
%
\def\doignoretextzzz#1{%
  \ifnum\doignorecount = 0	% We have just found the outermost @end.
    \let\next\enddoignore
  \else				% Still inside a nested condition.
    \advance\doignorecount by -1
    \let\next\doignoretext      % Look for the next @end.
  \fi
  \next
}

% Finish off ignored text.
{ \obeylines%
  % Ignore anything after the last `@end #1'; this matters in verbatim
  % environments, where otherwise the newline after an ignored conditional
  % would result in a blank line in the output.
  \gdef\enddoignore#1^^M{\endgroup\ignorespaces}%
}


% @set VAR sets the variable VAR to an empty value.
% @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE.
%
% Since we want to separate VAR from REST-OF-LINE (which might be
% empty), we can't just use \parsearg; we have to insert a space of our
% own to delimit the rest of the line, and then take it out again if we
% didn't need it.
% We rely on the fact that \parsearg sets \catcode`\ =10.
%
\parseargdef\set{\setyyy#1 \endsetyyy}
\def\setyyy#1 #2\endsetyyy{%
  {%
    \makevalueexpandable
    \def\temp{#2}%
    \edef\next{\gdef\makecsname{SET#1}}%
    \ifx\temp\empty
      \next{}%
    \else
      \setzzz#2\endsetzzz
    \fi
  }%
}
% Remove the trailing space that \set inserted.
\def\setzzz#1 \endsetzzz{\next{#1}}

% @clear VAR clears (i.e., unsets) the variable VAR.
%
\parseargdef\clear{%
  {%
    \makevalueexpandable
    \global\expandafter\let\csname SET#1\endcsname=\relax
  }%
}

% @value{foo} gets the text saved in variable foo.
\def\value{\begingroup\makevalueexpandable\valuexxx}
\def\valuexxx#1{\expandablevalue{#1}\endgroup}
{
  \catcode`\- = \active \catcode`\_ = \active
  %
  \gdef\makevalueexpandable{%
    \let\value = \expandablevalue
    % We don't want these characters active, ...
    \catcode`\-=\other \catcode`\_=\other
    % ..., but we might end up with active ones in the argument if
    % we're called from @code, as @code{@value{foo-bar_}}, though.
    % So \let them to their normal equivalents.
    \let-\realdash \let_\normalunderscore
  }
}

% We have this subroutine so that we can handle at least some @value's
% properly in indexes (we call \makevalueexpandable in \indexdummies).
% The command has to be fully expandable (if the variable is set), since
% the result winds up in the index file.  This means that if the
% variable's value contains other Texinfo commands, it's almost certain
% it will fail (although perhaps we could fix that with sufficient work
% to do a one-level expansion on the result, instead of complete).
%
\def\expandablevalue#1{%
  \expandafter\ifx\csname SET#1\endcsname\relax
    {[No value for ``#1'']}%
    \message{Variable `#1', used in @value, is not set.}%
  \else
    \csname SET#1\endcsname
  \fi
}
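
% For example (the variable name is arbitrary):
%   @set VERSION 2.1
%   This manual documents version @value{VERSION}.
% typesets `This manual documents version 2.1.'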

% @ifset VAR ... @end ifset reads the `...' iff VAR has been defined
% with @set.
%
% To get special treatment of `@end ifset', call \makecond and then redefine.
%
\makecond{ifset}
\def\ifset{\parsearg{\doifset{\let\next=\ifsetfail}}}
\def\doifset#1#2{%
  {%
    \makevalueexpandable
    \let\next=\empty
    \expandafter\ifx\csname SET#2\endcsname\relax
      #1% If not set, redefine \next.
    \fi
    \expandafter
  }\next
}
\def\ifsetfail{\doignore{ifset}}

% @ifclear VAR ... @end executes the `...' iff VAR has never been
% defined with @set, or has been undefined with @clear.
%
% The `\else' inside the `\doifset' parameter is a trick to reuse the
% above code: if the variable is not set, do nothing, if it is set,
% then redefine \next to \ifclearfail.
%
\makecond{ifclear}
\def\ifclear{\parsearg{\doifset{\else \let\next=\ifclearfail}}}
\def\ifclearfail{\doignore{ifclear}}
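
% For example (VERBOSE is a sample flag name):
%   @ifset VERBOSE
%   This text appears only after `@set VERBOSE'.
%   @end ifset
%   @ifclear VERBOSE
%   ... and this only while VERBOSE is unset or has been @clear'ed.
%   @end ifclear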

% @ifcommandisdefined CMD ... @end executes the `...' if CMD (written
% without the @) is in fact defined.  We can only feasibly check at the
% TeX level, so something like `mathcode' is going to be considered
% defined even though it is not a Texinfo command.
% 
\makecond{ifcommanddefined}
\def\ifcommanddefined{\parsearg{\doifcmddefined{\let\next=\ifcmddefinedfail}}}
%
\def\doifcmddefined#1#2{{%
    \makevalueexpandable
    \let\next=\empty
    \expandafter\ifx\csname #2\endcsname\relax
      #1% If not defined, \let\next as above.
    \fi
    \expandafter
  }\next
}
\def\ifcmddefinedfail{\doignore{ifcommanddefined}}

% @ifcommandnotdefined CMD ... is handled similarly to @ifclear above.
\makecond{ifcommandnotdefined}
\def\ifcommandnotdefined{%
  \parsearg{\doifcmddefined{\else \let\next=\ifcmdnotdefinedfail}}}
\def\ifcmdnotdefinedfail{\doignore{ifcommandnotdefined}}

% Set the `txicommandconditionals' variable, so documents have a way to
% test if the @ifcommand...defined conditionals are available.
\set txicommandconditionals

% @dircategory CATEGORY  -- specify a category of the dir file
% which this file should belong to.  Ignore this in TeX.
\let\dircategory=\comment

% @defininfoenclose.
\let\definfoenclose=\comment


\message{indexing,}
% Index generation facilities

% Define \newwrite to be identical to plain tex's \newwrite
% except not \outer, so it can be used within macros and \if's.
\edef\newwrite{\makecsname{ptexnewwrite}}

% \newindex {foo} defines an index named foo.
% It automatically defines \fooindex such that
% \fooindex ...rest of line... puts an entry in the index foo.
% It also defines \fooindfile to be the number of the output channel for
% the file that accumulates this index.  The file's extension is foo.
% The name of an index should be no more than 2 characters long
% for the sake of vms.
%
\def\newindex#1{%
  \iflinks
    \expandafter\newwrite \csname#1indfile\endcsname
    \openout \csname#1indfile\endcsname \jobname.#1 % Open the file
  \fi
  \expandafter\xdef\csname#1index\endcsname{%     % Define @#1index
    \noexpand\doindex{#1}}
}

% @defindex foo  ==  \newindex{foo}
%
\def\defindex{\parsearg\newindex}
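
% For example, `@defindex au' creates an author index: it opens
% \jobname.au for writing and defines @auindex to add entries to it.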

% Define @defcodeindex, like @defindex except put all entries in @code.
%
\def\defcodeindex{\parsearg\newcodeindex}
%
\def\newcodeindex#1{%
  \iflinks
    \expandafter\newwrite \csname#1indfile\endcsname
    \openout \csname#1indfile\endcsname \jobname.#1
  \fi
  \expandafter\xdef\csname#1index\endcsname{%
    \noexpand\docodeindex{#1}}%
}


% @synindex foo bar    makes index foo feed into index bar.
% Do this instead of @defindex foo if you don't want it as a separate index.
%
% @syncodeindex foo bar   similar, but put all entries made for index foo
% inside @code.
%
\def\synindex#1 #2 {\dosynindex\doindex{#1}{#2}}
\def\syncodeindex#1 #2 {\dosynindex\docodeindex{#1}{#2}}

% #1 is \doindex or \docodeindex, #2 the index getting redefined (foo),
% #3 the target index (bar).
\def\dosynindex#1#2#3{%
  % Only do \closeout if we haven't already done it, else we'll end up
  % closing the target index.
  \expandafter \ifx\csname donesynindex#2\endcsname \relax
    % The \closeout helps reduce unnecessary open files; the limit on the
    % Acorn RISC OS is a mere 16 files.
    \expandafter\closeout\csname#2indfile\endcsname
    \expandafter\let\csname donesynindex#2\endcsname = 1
  \fi
  % redefine \fooindfile:
  \expandafter\let\expandafter\temp\expandafter=\csname#3indfile\endcsname
  \expandafter\let\csname#2indfile\endcsname=\temp
  % redefine \fooindex:
  \expandafter\xdef\csname#2index\endcsname{\noexpand#1{#3}}%
}
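
% For example, `@syncodeindex vr fn' closes the variable index file and
% redirects @vindex entries, wrapped in @code, into the function index.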

% Define \doindex, the driver for all \fooindex macros.
% Argument #1 is generated by the calling \fooindex macro,
%  and it is "foo", the name of the index.

% \doindex just uses \parsearg; it calls \doind for the actual work.
% This is because \doind is more useful to call from other macros.

% There is also \dosubind {index}{topic}{subtopic}
% which makes an entry in a two-level index such as the operation index.

\def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer}
\def\singleindexer #1{\doind{\indexname}{#1}}

% like the previous two, but they put @code around the argument.
\def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer}
\def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}}

% Take care of Texinfo commands that can appear in an index entry.
% Since there are some commands we want to expand, and others we don't,
% we have to laboriously prevent expansion for those that we don't.
%
\def\indexdummies{%
  \escapechar = `\\     % use backslash in output files.
  \def\@{@}% change to @@ when we switch to @ as escape char in index files.
  \def\ {\realbackslash\space }%
  %
  % Need these unexpandable (because we define \tt as a dummy)
  % definitions when @{ or @} appear in index entry text.  Also, more
  % complicated, when \tex is in effect and \{ is a \delimiter again.
  % We can't use \lbracecmd and \rbracecmd because texindex assumes
  % braces and backslashes are used only as delimiters.  Perhaps we
  % should define @lbrace and @rbrace commands a la @comma.
  \def\{{{\tt\char123}}%
  \def\}{{\tt\char125}}%
  %
  % I don't entirely understand this, but when an index entry is
  % generated from a macro call, the \endinput which \scanmacro inserts
  % causes processing to be prematurely terminated.  This is,
  % apparently, because \indexsorttmp is fully expanded, and \endinput
  % is an expandable command.  The redefinition below makes \endinput
  % disappear altogether for that purpose -- although logging shows that
  % processing continues to some further point.  On the other hand, it
  % seems \endinput does not hurt in the printed index arg, since that
  % is still getting written without apparent harm.
  %
  % Sample source (mac-idx3.tex, reported by Graham Percival to
  % help-texinfo, 22may06):
  % @macro funindex {WORD}
  % @findex xyz
  % @end macro
  % ...
  % @funindex commtest
  %
  % The above is not enough to reproduce the bug, but it gives the flavor.
  %
  % Sample whatsit resulting:
  % .@write3{\entry{xyz}{@folio }{@code {xyz@endinput }}}
  %
  % So:
  \let\endinput = \empty
  %
  % Do the redefinitions.
  \commondummies
}

% For the aux and toc files, @ is the escape character.  So we want to
% redefine everything using @ as the escape character (instead of
% \realbackslash, still used for index files).  When everything uses @,
% this will be simpler.
%
\def\atdummies{%
  \def\@{@@}%
  \def\ {@ }%
  \let\{ = \lbraceatcmd
  \let\} = \rbraceatcmd
  %
  % Do the redefinitions.
  \commondummies
  \otherbackslash
}

% Called from \indexdummies and \atdummies.
%
\def\commondummies{%
  %
  % \definedummyword defines \#1 as \string\#1\space, thus effectively
  % preventing its expansion.  This is used only for control words,
  % not control letters, because the \space would be incorrect for
  % control characters, but is needed to separate the control word
  % from whatever follows.
  %
  % For control letters, we have \definedummyletter, which omits the
  % space.
  %
  % These can be used both for control words that take an argument and
  % those that do not.  If it is followed by {arg} in the input, then
  % that will dutifully get written to the index (or wherever).
  %
  \def\definedummyword  ##1{\def##1{\string##1\space}}%
  \def\definedummyletter##1{\def##1{\string##1}}%
  \let\definedummyaccent\definedummyletter
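  %
  % For example (an illustrative sketch, not executed here):
  %   \definedummyword\code  ==>  \def\code{\string\code\space}
  % so that `@code{foo}' in an index entry is written out as the literal
  % characters `\code {foo}' rather than being expanded by \write.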
  %
  \commondummiesnofonts
  %
  \definedummyletter\_%
  \definedummyletter\-%
  %
  % Non-English letters.
  \definedummyword\AA
  \definedummyword\AE
  \definedummyword\DH
  \definedummyword\L
  \definedummyword\O
  \definedummyword\OE
  \definedummyword\TH
  \definedummyword\aa
  \definedummyword\ae
  \definedummyword\dh
  \definedummyword\exclamdown
  \definedummyword\l
  \definedummyword\o
  \definedummyword\oe
  \definedummyword\ordf
  \definedummyword\ordm
  \definedummyword\questiondown
  \definedummyword\ss
  \definedummyword\th
  %
  % Although these internal commands shouldn't show up, sometimes they do.
  \definedummyword\bf
  \definedummyword\gtr
  \definedummyword\hat
  \definedummyword\less
  \definedummyword\sf
  \definedummyword\sl
  \definedummyword\tclose
  \definedummyword\tt
  %
  \definedummyword\LaTeX
  \definedummyword\TeX
  %
  % Assorted special characters.
  \definedummyword\arrow
  \definedummyword\bullet
  \definedummyword\comma
  \definedummyword\copyright
  \definedummyword\registeredsymbol
  \definedummyword\dots
  \definedummyword\enddots
  \definedummyword\entrybreak
  \definedummyword\equiv
  \definedummyword\error
  \definedummyword\euro
  \definedummyword\expansion
  \definedummyword\geq
  \definedummyword\guillemetleft
  \definedummyword\guillemetright
  \definedummyword\guilsinglleft
  \definedummyword\guilsinglright
  \definedummyword\lbracechar
  \definedummyword\leq
  \definedummyword\minus
  \definedummyword\ogonek
  \definedummyword\pounds
  \definedummyword\point
  \definedummyword\print
  \definedummyword\quotedblbase
  \definedummyword\quotedblleft
  \definedummyword\quotedblright
  \definedummyword\quoteleft
  \definedummyword\quoteright
  \definedummyword\quotesinglbase
  \definedummyword\rbracechar
  \definedummyword\result
  \definedummyword\textdegree
  %
  % We want to disable all macros so that they are not expanded by \write.
  \macrolist
  %
  \normalturnoffactive
  %
  % Handle some cases of @value -- where it does not contain any
  % (non-fully-expandable) commands.
  \makevalueexpandable
}

% \commondummiesnofonts: common to \commondummies and \indexnofonts.
%
\def\commondummiesnofonts{%
  % Control letters and accents.
  \definedummyletter\!%
  \definedummyaccent\"%
  \definedummyaccent\'%
  \definedummyletter\*%
  \definedummyaccent\,%
  \definedummyletter\.%
  \definedummyletter\/%
  \definedummyletter\:%
  \definedummyaccent\=%
  \definedummyletter\?%
  \definedummyaccent\^%
  \definedummyaccent\`%
  \definedummyaccent\~%
  \definedummyword\u
  \definedummyword\v
  \definedummyword\H
  \definedummyword\dotaccent
  \definedummyword\ogonek
  \definedummyword\ringaccent
  \definedummyword\tieaccent
  \definedummyword\ubaraccent
  \definedummyword\udotaccent
  \definedummyword\dotless
  %
  % Texinfo font commands.
  \definedummyword\b
  \definedummyword\i
  \definedummyword\r
  \definedummyword\sansserif
  \definedummyword\sc
  \definedummyword\slanted
  \definedummyword\t
  %
  % Commands that take arguments.
  \definedummyword\abbr
  \definedummyword\acronym
  \definedummyword\anchor
  \definedummyword\cite
  \definedummyword\code
  \definedummyword\command
  \definedummyword\dfn
  \definedummyword\dmn
  \definedummyword\email
  \definedummyword\emph
  \definedummyword\env
  \definedummyword\file
  \definedummyword\image
  \definedummyword\indicateurl
  \definedummyword\inforef
  \definedummyword\kbd
  \definedummyword\key
  \definedummyword\math
  \definedummyword\option
  \definedummyword\pxref
  \definedummyword\ref
  \definedummyword\samp
  \definedummyword\strong
  \definedummyword\tie
  \definedummyword\uref
  \definedummyword\url
  \definedummyword\var
  \definedummyword\verb
  \definedummyword\w
  \definedummyword\xref
}

% \indexnofonts is used when outputting the strings to sort the index
% by, and when constructing control sequence names.  It eliminates all
% control sequences and just writes whatever the best ASCII sort string
% would be for a given command (usually its argument).
%
\def\indexnofonts{%
  % Accent commands should become @asis.
  \def\definedummyaccent##1{\let##1\asis}%
  % We can just ignore other control letters.
  \def\definedummyletter##1{\let##1\empty}%
  % All control words become @asis by default; overrides below.
  \let\definedummyword\definedummyaccent
  %
  \commondummiesnofonts
  %
  % Don't no-op \tt, since it isn't a user-level command
  % and is used in the definitions of the active chars like <, >, |, etc.
  % Likewise with the other plain tex font commands.
  %\let\tt=\asis
  %
  \def\ { }%
  \def\@{@}%
  \def\_{\normalunderscore}%
  \def\-{}% @- shouldn't affect sorting
  %
  % Unfortunately, texindex is not prepared to handle braces in the
  % content at all.  So for index sorting, we map @{ and @} to strings
  % starting with |, since that ASCII character is between ASCII { and }.
  \def\{{|a}%
  \def\lbracechar{|a}%
  %
  \def\}{|b}%
  \def\rbracechar{|b}%
  %
  % Non-English letters.
  \def\AA{AA}%
  \def\AE{AE}%
  \def\DH{DZZ}%
  \def\L{L}%
  \def\OE{OE}%
  \def\O{O}%
  \def\TH{ZZZ}%
  \def\aa{aa}%
  \def\ae{ae}%
  \def\dh{dzz}%
  \def\exclamdown{!}%
  \def\l{l}%
  \def\oe{oe}%
  \def\ordf{a}%
  \def\ordm{o}%
  \def\o{o}%
  \def\questiondown{?}%
  \def\ss{ss}%
  \def\th{zzz}%
  %
  \def\LaTeX{LaTeX}%
  \def\TeX{TeX}%
  %
  % Assorted special characters.
  % (The following {} will end up in the sort string, but that's ok.)
  \def\arrow{->}%
  \def\bullet{bullet}%
  \def\comma{,}%
  \def\copyright{copyright}%
  \def\dots{...}%
  \def\enddots{...}%
  \def\equiv{==}%
  \def\error{error}%
  \def\euro{euro}%
  \def\expansion{==>}%
  \def\geq{>=}%
  \def\guillemetleft{<<}%
  \def\guillemetright{>>}%
  \def\guilsinglleft{<}%
  \def\guilsinglright{>}%
  \def\leq{<=}%
  \def\minus{-}%
  \def\point{.}%
  \def\pounds{pounds}%
  \def\print{-|}%
  \def\quotedblbase{"}%
  \def\quotedblleft{"}%
  \def\quotedblright{"}%
  \def\quoteleft{`}%
  \def\quoteright{'}%
  \def\quotesinglbase{,}%
  \def\registeredsymbol{R}%
  \def\result{=>}%
  \def\textdegree{o}%
  %
  \expandafter\ifx\csname SETtxiindexlquoteignore\endcsname\relax
  \else \indexlquoteignore \fi
  %
  % We need to get rid of all macros, leaving only the arguments (if present).
  % Of course this is not nearly correct, but it is the best we can do for now.
  % makeinfo does not expand macros in the argument to @deffn, which ends up
  % writing an index entry, and texindex isn't prepared for an index sort entry
  % that starts with \.
  %
  % Since macro invocations are followed by braces, we can just redefine them
  % to take a single TeX argument.  The case of a macro invocation that
  % goes to end-of-line is not handled.
  %
  \macrolist
}
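% For instance (illustrative only), under these redefinitions the entry
% `@code{foo}' reduces to the sort string `foo', and `@{' to `|a', so
% texindex normally sees neither braces nor font commands.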

% Undocumented (for FSFS 2nd ed.): @set txiindexlquoteignore makes us
% ignore left quotes in the sort term.
{\catcode`\`=\active
 \gdef\indexlquoteignore{\let`=\empty}}

\let\indexbackslash=0  %overridden during \printindex.
\let\SETmarginindex=\relax % put index entries in margin (undocumented)?

% Most index entries go through here, but \dosubind is the general case.
% #1 is the index name, #2 is the entry text.
\def\doind#1#2{\dosubind{#1}{#2}{}}

% Workhorse for all \fooindexes.
% #1 is name of index, #2 is stuff to put there, #3 is subentry --
% empty if called from \doind, as we usually are (the main exception
% is with most defuns, which call us directly).
%
\def\dosubind#1#2#3{%
  \iflinks
  {%
    % Store the main index entry text (including the third arg).
    \toks0 = {#2}%
    % If third arg is present, precede it with a space.
    \def\thirdarg{#3}%
    \ifx\thirdarg\empty \else
      \toks0 = \expandafter{\the\toks0 \space #3}%
    \fi
    %
    \edef\writeto{\csname#1indfile\endcsname}%
    %
    \safewhatsit\dosubindwrite
  }%
  \fi
}

% Write the entry in \toks0 to the index file:
%
\def\dosubindwrite{%
  % Put the index entry in the margin if desired.
  \ifx\SETmarginindex\relax\else
    \insert\margin{\hbox{\vrule height8pt depth3pt width0pt \the\toks0}}%
  \fi
  %
  % Remember, we are within a group.
  \indexdummies % Must do this here, since \bf, etc expand at this stage
  \def\backslashcurfont{\indexbackslash}% \indexbackslash isn't defined now
      % so it will be output as is; and it will print as backslash.
  %
  % Process the index entry with all font commands turned off, to
  % get the string to sort by.
  {\indexnofonts
   \edef\temp{\the\toks0}% need full expansion
   \xdef\indexsorttmp{\temp}%
  }%
  %
  % Set up the complete index entry, with both the sort key and
  % the original text, including any font commands.  We write
  % three arguments to \entry to the .?? file (four in the
  % subentry case); texindex reduces them to two when writing the
  % sorted .??s result.
  \edef\temp{%
    \write\writeto{%
      \string\entry{\indexsorttmp}{\noexpand\folio}{\the\toks0}}%
  }%
  \temp
}

% Take care of unwanted page breaks/skips around a whatsit:
%
% If a skip is the last thing on the list now, preserve it
% by backing up by \lastskip, doing the \write, then inserting
% the skip again.  Otherwise, the whatsit generated by the
% \write or \pdfdest will make \lastskip zero.  The result is that
% sequences like this:
% @end defun
% @tindex whatever
% @defun ...
% will have extra space inserted, because the \medbreak in the
% start of the @defun won't see the skip inserted by the @end of
% the previous defun.
%
% But don't do any of this if we're not in vertical mode.  We
% don't want to do a \vskip and prematurely end a paragraph.
%
% Avoid page breaks due to these extra skips, too.
%
% But wait, there is a catch there:
% We'll have to check whether \lastskip is zero skip.  \ifdim is not
% sufficient for this purpose, as it ignores stretch and shrink parts
% of the skip.  The only way seems to be to check the textual
% representation of the skip.
%
% The following is almost like \def\zeroskipmacro{0.0pt} except that
% the ``p'' and ``t'' characters have catcode \other, not 11 (letter).
%
\edef\zeroskipmacro{\expandafter\the\csname z@skip\endcsname}
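%
% For example (standard TeX behavior, shown for illustration): for a glue
% of `0pt plus 1fil', \ifdim\lastskip=0pt is true, because \ifdim compares
% only the natural width; but \the\lastskip gives `0.0pt plus 1.0fil',
% which differs textually from \zeroskipmacro's `0.0pt'.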
%
\newskip\whatsitskip
\newcount\whatsitpenalty
%
% ..., ready, GO:
%
\def\safewhatsit#1{\ifhmode
  #1%
 \else
  % \lastskip and \lastpenalty cannot both be nonzero simultaneously.
  \whatsitskip = \lastskip
  \edef\lastskipmacro{\the\lastskip}%
  \whatsitpenalty = \lastpenalty
  %
  % If \lastskip is nonzero, that means the last item was a
  % skip.  And since a skip is discardable, that means this
  % -\whatsitskip glue we're inserting is preceded by a
  % non-discardable item, therefore it is not a potential
  % breakpoint, therefore no \nobreak needed.
  \ifx\lastskipmacro\zeroskipmacro
  \else
    \vskip-\whatsitskip
  \fi
  %
  #1%
  %
  \ifx\lastskipmacro\zeroskipmacro
    % If \lastskip was zero, perhaps the last item was a penalty, and
    % perhaps it was >=10000, e.g., a \nobreak.  In that case, we want
    % to re-insert the same penalty (values >10000 are used for various
    % signals); since we just inserted a non-discardable item, any
    % following glue (such as a \parskip) would be a breakpoint.  For example:
    %   @deffn deffn-whatever
    %   @vindex index-whatever
    %   Description.
    % would allow a break between the index-whatever whatsit
    % and the "Description." paragraph.
    \ifnum\whatsitpenalty>9999 \penalty\whatsitpenalty \fi
  \else
    % On the other hand, if we had a nonzero \lastskip,
    % this make-up glue would be preceded by a non-discardable item
    % (the whatsit from the \write), so we must insert a \nobreak.
    \nobreak\vskip\whatsitskip
  \fi
\fi}

% The index entry written in the file actually looks like
%  \entry {sortstring}{page}{topic}
% or
%  \entry {sortstring}{page}{topic}{subtopic}
% The texindex program reads in these files and writes files
% containing these kinds of lines:
%  \initial {c}
%     before the first topic whose initial is c
%  \entry {topic}{pagelist}
%     for a topic that is used without subtopics
%  \primary {topic}
%     for the beginning of a topic that is used with subtopics
%  \secondary {subtopic}{pagelist}
%     for each subtopic.

% Define the user-accessible indexing commands
% @findex, @vindex, @kindex, @cindex.

\def\findex {\fnindex}
\def\kindex {\kyindex}
\def\cindex {\cpindex}
\def\vindex {\vrindex}
\def\tindex {\tpindex}
\def\pindex {\pgindex}

\def\cindexsub {\begingroup\obeylines\cindexsub}
{\obeylines %
\gdef\cindexsub "#1" #2^^M{\endgroup %
\dosubind{cp}{#2}{#1}}}
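
% Usage sketch (@cindexsub is not documented in the Texinfo manual):
%   @cindexsub "subtopic" main entry text
% calls \dosubind{cp}{main entry text}{subtopic}.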

% Define the macros used in formatting output of the sorted index material.

% @printindex causes a particular index (the ??s file) to get printed.
% It does not print any chapter heading (usually an @unnumbered).
%
\parseargdef\printindex{\begingroup
  \dobreak \chapheadingskip{10000}%
  %
  \smallfonts \rm
  \tolerance = 9500
  \plainfrenchspacing
  \everypar = {}% don't want the \kern -\parindent from indentation suppression.
  %
  % See if the index file exists and is nonempty.
  % Change catcode of @ here so that if the index file contains
  % \initial {@}
  % as its first line, TeX doesn't complain about mismatched braces
  % (because it thinks @} is a control sequence).
  \catcode`\@ = 11
  \openin 1 \jobname.#1s
  \ifeof 1
    % \enddoublecolumns gets confused if there is no text in the index,
    % and it loses the chapter title and the aux file entries for the
    % index.  The easiest way to prevent this problem is to make sure
    % there is some text.
    \putwordIndexNonexistent
  \else
    %
    % If the index file exists but is empty, then \openin leaves \ifeof
    % false.  We have to make TeX try to read something from the file, so
    % it can discover if there is anything in it.
    \read 1 to \temp
    \ifeof 1
      \putwordIndexIsEmpty
    \else
      % Index files are almost Texinfo source, but we use \ as the escape
      % character.  It would be better to use @, but that's too big a change
      % to make right now.
      \def\indexbackslash{\backslashcurfont}%
      \catcode`\\ = 0
      \escapechar = `\\
      \begindoublecolumns
      \input \jobname.#1s
      \enddoublecolumns
    \fi
  \fi
  \closein 1
\endgroup}

% These macros are used by the sorted index file itself.
% Change them to control the appearance of the index.

\def\initial#1{{%
  % Some minor font changes for the special characters.
  \let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt
  %
  % Remove any glue we may have, we'll be inserting our own.
  \removelastskip
  %
  % We like breaks before the index initials, so insert a bonus.
  \nobreak
  \vskip 0pt plus 3\baselineskip
  \penalty 0
  \vskip 0pt plus -3\baselineskip
  %
  % Typeset the initial.  Making this add up to a whole number of
  % baselineskips increases the chance of the dots lining up from column
  % to column.  It still won't often be perfect, because of the stretch
  % we need before each entry, but it's better.
  %
  % No shrink because it confuses \balancecolumns.
  \vskip 1.67\baselineskip plus .5\baselineskip
  \leftline{\secbf #1}%
  % Do our best not to break after the initial.
  \nobreak
  \vskip .33\baselineskip plus .1\baselineskip
}}

% \entry typesets a paragraph consisting of the text (#1), dot leaders, and
% then page number (#2) flushed to the right margin.  It is used for index
% and table of contents entries.  The paragraph is indented by \leftskip.
%
% A straightforward implementation would start like this:
%	\def\entry#1#2{...
% But this freezes the catcodes in the argument, and can cause problems to
% @code, which sets - active.  This problem was fixed by a kludge---
% ``-'' was active throughout the whole index, but this isn't really right.
% The right solution is to prevent \entry from swallowing the whole text.
%                                 --kasal, 21nov03
\def\entry{%
  \begingroup
    %
    % Start a new paragraph if necessary, so our assignments below can't
    % affect previous text.
    \par
    %
    % Do not fill out the last line with white space.
    \parfillskip = 0in
    %
    % No extra space above this paragraph.
    \parskip = 0in
    %
    % Do not prefer a separate line ending with a hyphen to fewer lines.
    \finalhyphendemerits = 0
    %
    % \hangindent is only relevant when the entry text and page number
    % don't both fit on one line.  In that case, bob suggests starting the
    % dots pretty far over on the line.  Unfortunately, a large
    % indentation looks wrong when the entry text itself is broken across
    % lines.  So we use a small indentation and put up with long leaders.
    %
    % \hangafter is reset to 1 (which is the value we want) at the start
    % of each paragraph, so we need not do anything with that.
    \hangindent = 2em
    %
    % When the entry text needs to be broken, just fill out the first line
    % with blank space.
    \rightskip = 0pt plus1fil
    %
    % A bit of stretch before each entry for the benefit of balancing
    % columns.
    \vskip 0pt plus1pt
    %
    % When reading the text of entry, convert explicit line breaks
    % from @* into spaces.  The user might give these in long section
    % titles, for instance.
    \def\*{\unskip\space\ignorespaces}%
    \def\entrybreak{\hfil\break}%
    %
    % Swallow the left brace of the text (first parameter):
    \afterassignment\doentry
    \let\temp =
}
\def\entrybreak{\unskip\space\ignorespaces}%
\def\doentry{%
    \bgroup % Instead of the swallowed brace.
      \noindent
      \aftergroup\finishentry
      % And now comes the text of the entry.
}
\def\finishentry#1{%
    % #1 is the page number.
    %
    % The following is kludged to not output a line of dots in the index if
    % there are no page numbers.  The next person who breaks this will be
    % cursed by a Unix daemon.
    \setbox\boxA = \hbox{#1}%
    \ifdim\wd\boxA = 0pt
      \ %
    \else
      %
      % If we must, put the page number on a line of its own, and fill out
      % this line with blank space.  (The \hfil is overwhelmed with the
      % fill leaders glue in \indexdotfill if the page number does fit.)
      \hfil\penalty50
      \null\nobreak\indexdotfill % Have leaders before the page number.
      %
      % The `\ ' here is removed by the implicit \unskip that TeX does as
      % part of (the primitive) \par.  Without it, a spurious underfull
      % \hbox ensues.
      \ifpdf
	\pdfgettoks#1.%
	\ \the\toksA
      \else
	\ #1%
      \fi
    \fi
    \par
  \endgroup
}

% Like plain.tex's \dotfill, except uses up at least 1 em.
\def\indexdotfill{\cleaders
  \hbox{$\mathsurround=0pt \mkern1.5mu.\mkern1.5mu$}\hskip 1em plus 1fill}

\def\primary #1{\line{#1\hfil}}

\newskip\secondaryindent \secondaryindent=0.5cm
\def\secondary#1#2{{%
  \parfillskip=0in
  \parskip=0in
  \hangindent=1in
  \hangafter=1
  \noindent\hskip\secondaryindent\hbox{#1}\indexdotfill
  \ifpdf
    \pdfgettoks#2.\ \the\toksA % The page number ends the paragraph.
  \else
    #2
  \fi
  \par
}}

% Define two-column mode, which we use to typeset indexes.
% Adapted from the TeXbook, page 416, which is to say,
% the manmac.tex format used to print the TeXbook itself.
\catcode`\@=11

\newbox\partialpage
\newdimen\doublecolumnhsize

\def\begindoublecolumns{\begingroup % ended by \enddoublecolumns
  % Grab any single-column material above us.
  \output = {%
    %
    % Here is a possibility not foreseen in manmac: if we accumulate a
    % whole lot of material, we might end up calling this \output
    % routine twice in a row (see the doublecol-lose test, which is
    % essentially a couple of indexes with @setchapternewpage off).  In
    % that case we just ship out what is in \partialpage with the normal
    % output routine.  Generally, \partialpage will be empty when this
    % runs and this will be a no-op.  See the indexspread.tex test case.
    \ifvoid\partialpage \else
      \onepageout{\pagecontents\partialpage}%
    \fi
    %
    \global\setbox\partialpage = \vbox{%
      % Unvbox the main output page.
      \unvbox\PAGE
      \kern-\topskip \kern\baselineskip
    }%
  }%
  \eject % run that output routine to set \partialpage
  %
  % Use the double-column output routine for subsequent pages.
  \output = {\doublecolumnout}%
  %
  % Change the page size parameters.  We could do this once outside this
  % routine, in each of @smallbook, @afourpaper, and the default 8.5x11
  % format, but then we repeat the same computation.  Repeating a couple
  % of assignments once per index is clearly meaningless for the
  % execution time, so we may as well do it in one place.
  %
  % First we halve the line length, less a little for the gutter between
  % the columns.  We compute the gutter based on the line length, so it
  % changes automatically with the paper format.  The magic constant
  % below is chosen so that the gutter has the same value (well, +-<1pt)
  % as it did when we hard-coded it.
  %
  % We put the result in a separate register, \doublecolumnhsize, so we
  % can restore it in \pagesofar, after \hsize itself has (potentially)
  % been clobbered.
  %
  \doublecolumnhsize = \hsize
    \advance\doublecolumnhsize by -.04154\hsize
    \divide\doublecolumnhsize by 2
  \hsize = \doublecolumnhsize
  %
  % Double the \vsize as well.  (We don't need a separate register here,
  % since nobody clobbers \vsize.)
  \vsize = 2\vsize
}
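% Illustrative arithmetic (assuming a 6in \hsize, the Texinfo default):
%   gutter = .04154\hsize        ~ 0.25in (about 18pt)
%   column = (\hsize - gutter)/2 ~ 2.87in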

% The double-column output routine for all double-column pages except
% the last.
%
\def\doublecolumnout{%
  \splittopskip=\topskip \splitmaxdepth=\maxdepth
  % Get the available space for the double columns -- the normal
  % (undoubled) page height minus any material left over from the
  % previous page.
  \dimen@ = \vsize
  \divide\dimen@ by 2
  \advance\dimen@ by -\ht\partialpage
  %
  % box0 will be the left-hand column, box2 the right.
  \setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@
  \onepageout\pagesofar
  \unvbox255
  \penalty\outputpenalty
}
%
% Re-output the contents of the output page -- any previous material,
% followed by the two boxes we just split, in box0 and box2.
\def\pagesofar{%
  \unvbox\partialpage
  %
  \hsize = \doublecolumnhsize
  \wd0=\hsize \wd2=\hsize
  \hbox to\pagewidth{\box0\hfil\box2}%
}
%
% All done with double columns.
\def\enddoublecolumns{%
  % The following penalty ensures that the page builder is exercised
  % _before_ we change the output routine.  This is necessary in the
  % following situation:
  %
  % The last section of the index consists only of a single entry.
  % Before this section, \pagetotal is less than \pagegoal, so no
  % break occurs before the last section starts.  However, the last
  % section, consisting of \initial and the single \entry, does not
  % fit on the page and has to be broken off.  Without the following
  % penalty the page builder will not be exercised until \eject
  % below, and by that time we'll already have changed the output
  % routine to the \balancecolumns version, so the next-to-last
  % double-column page will be processed with \balancecolumns, which
  % is wrong:  The two columns will go to the main vertical list, with
  % the broken-off section in the recent contributions.  As soon as
  % the output routine finishes, TeX starts reconsidering the page
  % break.  The two columns and the broken-off section both fit on the
  % page, because the two columns now take up only half of the page
  % goal.  When TeX sees \eject from below which follows the final
  % section, it invokes the new output routine that we've set after
  % \balancecolumns below; \onepageout will try to fit the two columns
  % and the final section into the vbox of \pageheight (see
  % \pagebody), causing an overfull box.
  %
  % Note that glue won't work here, because glue does not exercise the
  % page builder, unlike penalties (see The TeXbook, pp. 280-281).
  \penalty0
  %
  \output = {%
    % Split the last of the double-column material.  Leave it on the
    % current page, no automatic page break.
    \balancecolumns
    %
    % If we end up splitting too much material for the current page,
    % though, there will be another page break right after this \output
    % invocation ends.  Having called \balancecolumns once, we do not
    % want to call it again.  Therefore, reset \output to its normal
    % definition right away.  (We hope \balancecolumns will never be
    % called on to balance too much material, but if it is, this makes
    % the output somewhat more palatable.)
    \global\output = {\onepageout{\pagecontents\PAGE}}%
  }%
  \eject
  \endgroup % started in \begindoublecolumns
  %
  % \pagegoal was set to the doubled \vsize above, since we restarted
  % the current page.  We're now back to normal single-column
  % typesetting, so reset \pagegoal to the normal \vsize (after the
  % \endgroup where \vsize got restored).
  \pagegoal = \vsize
}
%
% Called at the end of the double column material.
\def\balancecolumns{%
  \setbox0 = \vbox{\unvbox255}% like \box255 but more efficient, see p.120.
  \dimen@ = \ht0
  \advance\dimen@ by \topskip
  \advance\dimen@ by-\baselineskip
  \divide\dimen@ by 2 % target to split to
  %debug\message{final 2-column material height=\the\ht0, target=\the\dimen@.}%
  \splittopskip = \topskip
  % Loop until we get a decent breakpoint.
  {%
    \vbadness = 10000
    \loop
      \global\setbox3 = \copy0
      \global\setbox1 = \vsplit3 to \dimen@
    \ifdim\ht3>\dimen@
      \global\advance\dimen@ by 1pt
    \repeat
  }%
  %debug\message{split to \the\dimen@, column heights: \the\ht1, \the\ht3.}%
  \setbox0=\vbox to\dimen@{\unvbox1}%
  \setbox2=\vbox to\dimen@{\unvbox3}%
  %
  \pagesofar
}
\catcode`\@ = \other


\message{sectioning,}
% Chapters, sections, etc.

% Let's start with @part.
\outer\parseargdef\part{\partzzz{#1}}
\def\partzzz#1{%
  \chapoddpage
  \null
  \vskip.3\vsize  % move it down on the page a bit
  \begingroup
    \noindent \titlefonts\rmisbold #1\par % the text
    \let\lastnode=\empty      % no node to associate with
    \writetocentry{part}{#1}{}% but put it in the toc
    \headingsoff              % no headline or footline on the part page
    \chapoddpage
  \endgroup
}

% \unnumberedno is an oxymoron.  But we count the unnumbered
% sections so that we can refer to them unambiguously in the pdf
% outlines by their "section number".  We avoid collisions with chapter
% numbers by starting them at 10000.  (If a document ever has 10000
% chapters, we're in trouble anyway, I'm sure.)
\newcount\unnumberedno \unnumberedno = 10000
\newcount\chapno
\newcount\secno        \secno=0
\newcount\subsecno     \subsecno=0
\newcount\subsubsecno  \subsubsecno=0

% This counter is funny since it counts through charcodes of letters A, B, ...
\newcount\appendixno  \appendixno = `\@
%
% \def\appendixletter{\char\the\appendixno}
% We do the following ugly conditional instead of the above simple
% construct for the sake of pdftex, which needs the actual
% letter in the expansion, not just typeset.
%
\def\appendixletter{%
  \ifnum\appendixno=`A A%
  \else\ifnum\appendixno=`B B%
  \else\ifnum\appendixno=`C C%
  \else\ifnum\appendixno=`D D%
  \else\ifnum\appendixno=`E E%
  \else\ifnum\appendixno=`F F%
  \else\ifnum\appendixno=`G G%
  \else\ifnum\appendixno=`H H%
  \else\ifnum\appendixno=`I I%
  \else\ifnum\appendixno=`J J%
  \else\ifnum\appendixno=`K K%
  \else\ifnum\appendixno=`L L%
  \else\ifnum\appendixno=`M M%
  \else\ifnum\appendixno=`N N%
  \else\ifnum\appendixno=`O O%
  \else\ifnum\appendixno=`P P%
  \else\ifnum\appendixno=`Q Q%
  \else\ifnum\appendixno=`R R%
  \else\ifnum\appendixno=`S S%
  \else\ifnum\appendixno=`T T%
  \else\ifnum\appendixno=`U U%
  \else\ifnum\appendixno=`V V%
  \else\ifnum\appendixno=`W W%
  \else\ifnum\appendixno=`X X%
  \else\ifnum\appendixno=`Y Y%
  \else\ifnum\appendixno=`Z Z%
  % The \the is necessary, despite appearances, because \appendixletter is
  % expanded while writing the .toc file.  \char\appendixno is not
  % expandable, thus it is written literally, thus all appendixes come out
  % with the same letter (or @) in the toc without it.
  \else\char\the\appendixno
  \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
  \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi}

% Each @chapter defines these (using marks) as the number+name, number
% and name of the chapter.  Page headings and footings can use
% these.  @section does likewise.
\def\thischapter{}
\def\thischapternum{}
\def\thischaptername{}
\def\thissection{}
\def\thissectionnum{}
\def\thissectionname{}

\newcount\absseclevel % used to calculate proper heading level
\newcount\secbase\secbase=0 % @raisesections/@lowersections modify this count

% @raisesections: treat @section as chapter, @subsection as section, etc.
\def\raisesections{\global\advance\secbase by -1}
\let\up=\raisesections % original BFox name

% @lowersections: treat @chapter as section, @section as subsection, etc.
\def\lowersections{\global\advance\secbase by 1}
\let\down=\lowersections % original BFox name

% we only have subsub.
\chardef\maxseclevel = 3
%
% A numbered section within an unnumbered changes to unnumbered too.
% To achieve this, remember the "biggest" unnum. sec. we are currently in:
\chardef\unnlevel = \maxseclevel
%
% Trace whether the current chapter is an appendix or not:
% \chapheadtype is "N" or "A", unnumbered chapters are ignored.
\def\chapheadtype{N}

% Choose a heading macro
% #1 is heading type
% #2 is heading level
% #3 is text for heading
\def\genhead#1#2#3{%
  % Compute the abs. sec. level:
  \absseclevel=#2
  \advance\absseclevel by \secbase
  % Make sure \absseclevel doesn't fall outside the range:
  \ifnum \absseclevel < 0
    \absseclevel = 0
  \else
    \ifnum \absseclevel > 3
      \absseclevel = 3
    \fi
  \fi
  % The heading type:
  \def\headtype{#1}%
  \if \headtype U%
    \ifnum \absseclevel < \unnlevel
      \chardef\unnlevel = \absseclevel
    \fi
  \else
    % Check for appendix sections:
    \ifnum \absseclevel = 0
      \edef\chapheadtype{\headtype}%
    \else
      \if \headtype A\if \chapheadtype N%
	\errmessage{@appendix... within a non-appendix chapter}%
      \fi\fi
    \fi
    % Check for numbered within unnumbered:
    \ifnum \absseclevel > \unnlevel
      \def\headtype{U}%
    \else
      \chardef\unnlevel = 3
    \fi
  \fi
  % Now print the heading:
  \if \headtype U%
    \ifcase\absseclevel
	\unnumberedzzz{#3}%
    \or \unnumberedseczzz{#3}%
    \or \unnumberedsubseczzz{#3}%
    \or \unnumberedsubsubseczzz{#3}%
    \fi
  \else
    \if \headtype A%
      \ifcase\absseclevel
	  \appendixzzz{#3}%
      \or \appendixsectionzzz{#3}%
      \or \appendixsubseczzz{#3}%
      \or \appendixsubsubseczzz{#3}%
      \fi
    \else
      \ifcase\absseclevel
	  \chapterzzz{#3}%
      \or \seczzz{#3}%
      \or \numberedsubseczzz{#3}%
      \or \numberedsubsubseczzz{#3}%
      \fi
    \fi
  \fi
  \suppressfirstparagraphindent
}

% an interface:
\def\numhead{\genhead N}
\def\apphead{\genhead A}
\def\unnmhead{\genhead U}

% @chapter, @appendix, @unnumbered.  Increment top-level counter, reset
% all lower-level sectioning counters to zero.
%
% Also set \chaplevelprefix, which we prepend to @float sequence numbers
% (e.g., figures), q.v.  By default (before any chapter), that is empty.
\let\chaplevelprefix = \empty
%
\outer\parseargdef\chapter{\numhead0{#1}} % normally numhead0 calls chapterzzz
\def\chapterzzz#1{%
  % section resetting is \global in case the chapter is in a group, such
  % as an @include file.
  \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
    \global\advance\chapno by 1
  %
  % Used for \float.
  \gdef\chaplevelprefix{\the\chapno.}%
  \resetallfloatnos
  %
  % \putwordChapter can contain complex things in translations.
  \toks0=\expandafter{\putwordChapter}%
  \message{\the\toks0 \space \the\chapno}%
  %
  % Write the actual heading.
  \chapmacro{#1}{Ynumbered}{\the\chapno}%
  %
  % So @section and the like are numbered underneath this chapter.
  \global\let\section = \numberedsec
  \global\let\subsection = \numberedsubsec
  \global\let\subsubsection = \numberedsubsubsec
}

\outer\parseargdef\appendix{\apphead0{#1}} % normally calls appendixzzz
%
\def\appendixzzz#1{%
  \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
    \global\advance\appendixno by 1
  \gdef\chaplevelprefix{\appendixletter.}%
  \resetallfloatnos
  %
  % \putwordAppendix can contain complex things in translations.
  \toks0=\expandafter{\putwordAppendix}%
  \message{\the\toks0 \space \appendixletter}%
  %
  \chapmacro{#1}{Yappendix}{\appendixletter}%
  %
  \global\let\section = \appendixsec
  \global\let\subsection = \appendixsubsec
  \global\let\subsubsection = \appendixsubsubsec
}

% normally unnmhead0 calls unnumberedzzz:
\outer\parseargdef\unnumbered{\unnmhead0{#1}}
\def\unnumberedzzz#1{%
  \global\secno=0 \global\subsecno=0 \global\subsubsecno=0
    \global\advance\unnumberedno by 1
  %
  % Since an unnumbered has no number, no prefix for figures.
  \global\let\chaplevelprefix = \empty
  \resetallfloatnos
  %
  % This used to be simply \message{#1}, but TeX fully expands the
  % argument to \message.  Therefore, if #1 contained @-commands, TeX
  % expanded them.  For example, in `@unnumbered The @cite{Book}', TeX
  % expanded @cite (which turns out to cause errors because \cite is meant
  % to be executed, not expanded).
  %
  % Anyway, we don't want the fully-expanded definition of @cite to appear
  % as a result of the \message, we just want `@cite' itself.  We use
  % \the<toks register> to achieve this: TeX expands \the<toks> only once,
  % simply yielding the contents of <toks register>.  (We also do this for
  % the toc entries.)
  \toks0 = {#1}%
  \message{(\the\toks0)}%
  %
  \chapmacro{#1}{Ynothing}{\the\unnumberedno}%
  %
  \global\let\section = \unnumberedsec
  \global\let\subsection = \unnumberedsubsec
  \global\let\subsubsection = \unnumberedsubsubsec
}

% @centerchap is like @unnumbered, but the heading is centered.
\outer\parseargdef\centerchap{%
  % Well, we could do the following in a group, but that would break
  % an assumption that \chapmacro is called at the outermost level.
  % Thus we are safer this way:		--kasal, 24feb04
  \let\centerparametersmaybe = \centerparameters
  \unnmhead0{#1}%
  \let\centerparametersmaybe = \relax
}

% @top is like @unnumbered.
\let\top\unnumbered

% Sections.
% 
\outer\parseargdef\numberedsec{\numhead1{#1}} % normally calls seczzz
\def\seczzz#1{%
  \global\subsecno=0 \global\subsubsecno=0  \global\advance\secno by 1
  \sectionheading{#1}{sec}{Ynumbered}{\the\chapno.\the\secno}%
}

% normally calls appendixsectionzzz:
\outer\parseargdef\appendixsection{\apphead1{#1}}
\def\appendixsectionzzz#1{%
  \global\subsecno=0 \global\subsubsecno=0  \global\advance\secno by 1
  \sectionheading{#1}{sec}{Yappendix}{\appendixletter.\the\secno}%
}
\let\appendixsec\appendixsection

% normally calls unnumberedseczzz:
\outer\parseargdef\unnumberedsec{\unnmhead1{#1}}
\def\unnumberedseczzz#1{%
  \global\subsecno=0 \global\subsubsecno=0  \global\advance\secno by 1
  \sectionheading{#1}{sec}{Ynothing}{\the\unnumberedno.\the\secno}%
}

% Subsections.
% 
% normally calls numberedsubseczzz:
\outer\parseargdef\numberedsubsec{\numhead2{#1}}
\def\numberedsubseczzz#1{%
  \global\subsubsecno=0  \global\advance\subsecno by 1
  \sectionheading{#1}{subsec}{Ynumbered}{\the\chapno.\the\secno.\the\subsecno}%
}

% normally calls appendixsubseczzz:
\outer\parseargdef\appendixsubsec{\apphead2{#1}}
\def\appendixsubseczzz#1{%
  \global\subsubsecno=0  \global\advance\subsecno by 1
  \sectionheading{#1}{subsec}{Yappendix}%
                 {\appendixletter.\the\secno.\the\subsecno}%
}

% normally calls unnumberedsubseczzz:
\outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}}
\def\unnumberedsubseczzz#1{%
  \global\subsubsecno=0  \global\advance\subsecno by 1
  \sectionheading{#1}{subsec}{Ynothing}%
                 {\the\unnumberedno.\the\secno.\the\subsecno}%
}

% Subsubsections.
% 
% normally calls numberedsubsubseczzz:
\outer\parseargdef\numberedsubsubsec{\numhead3{#1}}
\def\numberedsubsubseczzz#1{%
  \global\advance\subsubsecno by 1
  \sectionheading{#1}{subsubsec}{Ynumbered}%
                 {\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno}%
}

% normally calls appendixsubsubseczzz:
\outer\parseargdef\appendixsubsubsec{\apphead3{#1}}
\def\appendixsubsubseczzz#1{%
  \global\advance\subsubsecno by 1
  \sectionheading{#1}{subsubsec}{Yappendix}%
                 {\appendixletter.\the\secno.\the\subsecno.\the\subsubsecno}%
}

% normally calls unnumberedsubsubseczzz:
\outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}}
\def\unnumberedsubsubseczzz#1{%
  \global\advance\subsubsecno by 1
  \sectionheading{#1}{subsubsec}{Ynothing}%
                 {\the\unnumberedno.\the\secno.\the\subsecno.\the\subsubsecno}%
}

% These macros control what the section commands do, according
% to what kind of chapter we are in (ordinary, appendix, or unnumbered).
% Define them by default for a numbered chapter.
\let\section = \numberedsec
\let\subsection = \numberedsubsec
\let\subsubsection = \numberedsubsubsec

% Define @majorheading, @heading and @subheading

\def\majorheading{%
  {\advance\chapheadingskip by 10pt \chapbreak }%
  \parsearg\chapheadingzzz
}

\def\chapheading{\chapbreak \parsearg\chapheadingzzz}
\def\chapheadingzzz#1{%
  \vbox{\chapfonts \raggedtitlesettings #1\par}%
  \nobreak\bigskip \nobreak
  \suppressfirstparagraphindent
}

% @heading, @subheading, @subsubheading.
\parseargdef\heading{\sectionheading{#1}{sec}{Yomitfromtoc}{}
  \suppressfirstparagraphindent}
\parseargdef\subheading{\sectionheading{#1}{subsec}{Yomitfromtoc}{}
  \suppressfirstparagraphindent}
\parseargdef\subsubheading{\sectionheading{#1}{subsubsec}{Yomitfromtoc}{}
  \suppressfirstparagraphindent}

% These macros generate a chapter, section, etc. heading only
% (including whitespace, linebreaking, etc. around it),
% given all the information in convenient, parsed form.

% Args are the skip and penalty (usually negative)
\def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi}

% Parameter controlling skip before chapter headings (if needed)
\newskip\chapheadingskip

% Define plain chapter starts, and page on/off switching for it.
\def\chapbreak{\dobreak \chapheadingskip {-4000}}
\def\chappager{\par\vfill\supereject}
% Because \domark is called before \chapoddpage, the filler page will
% get the headings for the next chapter, which is wrong.  But we don't
% care -- we just disable all headings on the filler page.
\def\chapoddpage{%
  \chappager
  \ifodd\pageno \else
    \begingroup
      \headingsoff
      \null
      \chappager
    \endgroup
  \fi
}

\def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname}
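% For instance, `@setchapternewpage odd' expands to \CHAPPAGodd (defined
% below), which sets the chapter break, contents alignment, and page
% alignment macros to \chapoddpage (start each on a fresh odd page) and
% makes @headings on behave like @headings double.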

\def\CHAPPAGoff{%
\global\let\contentsalignmacro = \chappager
\global\let\pchapsepmacro=\chapbreak
\global\let\pagealignmacro=\chappager}

\def\CHAPPAGon{%
\global\let\contentsalignmacro = \chappager
\global\let\pchapsepmacro=\chappager
\global\let\pagealignmacro=\chappager
\global\def\HEADINGSon{\HEADINGSsingle}}

\def\CHAPPAGodd{%
\global\let\contentsalignmacro = \chapoddpage
\global\let\pchapsepmacro=\chapoddpage
\global\let\pagealignmacro=\chapoddpage
\global\def\HEADINGSon{\HEADINGSdouble}}

\CHAPPAGon

% Chapter opening.
%
% #1 is the text, #2 is the section type (Ynumbered, Ynothing,
% Yappendix, Yomitfromtoc), #3 the chapter number.
%
% To test against our argument.
\def\Ynothingkeyword{Ynothing}
\def\Yomitfromtockeyword{Yomitfromtoc}
\def\Yappendixkeyword{Yappendix}
%
\def\chapmacro#1#2#3{%
  % Insert the first mark before the heading break (see notes for \domark).
  \let\prevchapterdefs=\lastchapterdefs
  \let\prevsectiondefs=\lastsectiondefs
  \gdef\lastsectiondefs{\gdef\thissectionname{}\gdef\thissectionnum{}%
                        \gdef\thissection{}}%
  %
  \def\temptype{#2}%
  \ifx\temptype\Ynothingkeyword
    \gdef\lastchapterdefs{\gdef\thischaptername{#1}\gdef\thischapternum{}%
                          \gdef\thischapter{\thischaptername}}%
  \else\ifx\temptype\Yomitfromtockeyword
    \gdef\lastchapterdefs{\gdef\thischaptername{#1}\gdef\thischapternum{}%
                          \gdef\thischapter{}}%
  \else\ifx\temptype\Yappendixkeyword
    \toks0={#1}%
    \xdef\lastchapterdefs{%
      \gdef\noexpand\thischaptername{\the\toks0}%
      \gdef\noexpand\thischapternum{\appendixletter}%
      % \noexpand\putwordAppendix avoids expanding indigestible
      % commands in some of the translations.
      \gdef\noexpand\thischapter{\noexpand\putwordAppendix{}
                                 \noexpand\thischapternum:
                                 \noexpand\thischaptername}%
    }%
  \else
    \toks0={#1}%
    \xdef\lastchapterdefs{%
      \gdef\noexpand\thischaptername{\the\toks0}%
      \gdef\noexpand\thischapternum{\the\chapno}%
      % \noexpand\putwordChapter avoids expanding indigestible
      % commands in some of the translations.
      \gdef\noexpand\thischapter{\noexpand\putwordChapter{}
                                 \noexpand\thischapternum:
                                 \noexpand\thischaptername}%
    }%
  \fi\fi\fi
  %
  % Output the mark.  Pass it through \safewhatsit, to take care of
  % the preceding space.
  \safewhatsit\domark
  %
  % Insert the chapter heading break.
  \pchapsepmacro
  %
  % Now the second mark, after the heading break.  No break points
  % between here and the heading.
  \let\prevchapterdefs=\lastchapterdefs
  \let\prevsectiondefs=\lastsectiondefs
  \domark
  %
  {%
    \chapfonts \rmisbold
    %
    % Have to define \lastsection before calling \donoderef, because the
    % xref code eventually uses it.  On the other hand, it has to be called
    % after \pchapsepmacro, or the headline will change too soon.
    \gdef\lastsection{#1}%
    %
    % Only insert the separating space if we have a chapter/appendix
    % number, and don't print the unnumbered ``number''.
    \ifx\temptype\Ynothingkeyword
      \setbox0 = \hbox{}%
      \def\toctype{unnchap}%
    \else\ifx\temptype\Yomitfromtockeyword
      \setbox0 = \hbox{}% contents like unnumbered, but no toc entry
      \def\toctype{omit}%
    \else\ifx\temptype\Yappendixkeyword
      \setbox0 = \hbox{\putwordAppendix{} #3\enspace}%
      \def\toctype{app}%
    \else
      \setbox0 = \hbox{#3\enspace}%
      \def\toctype{numchap}%
    \fi\fi\fi
    %
    % Write the toc entry for this chapter.  Must come before the
    % \donoderef, because we include the current node name in the toc
    % entry, and \donoderef resets it to empty.
    \writetocentry{\toctype}{#1}{#3}%
    %
    % For pdftex, we have to write out the node definition (aka, make
    % the pdfdest) after any page break, but before the actual text has
    % been typeset.  If the destination for the pdf outline is after the
    % text, then jumping from the outline may wind up with the text not
    % being visible, for instance under high magnification.
    \donoderef{#2}%
    %
    % Typeset the actual heading.
    \nobreak % Avoid page breaks at the interline glue.
    \vbox{\raggedtitlesettings \hangindent=\wd0 \centerparametersmaybe
          \unhbox0 #1\par}%
  }%
  \nobreak\bigskip % no page break after a chapter title
  \nobreak
}

% @centerchap -- centered and unnumbered.
\let\centerparametersmaybe = \relax
\def\centerparameters{%
  \advance\rightskip by 3\rightskip
  \leftskip = \rightskip
  \parfillskip = 0pt
}


% I don't think this chapter style is supported any more, so I'm not
% updating it with the new noderef stuff.  We'll see.  --karl, 11aug03.
%
\def\setchapterstyle #1 {\csname CHAPF#1\endcsname}
%
\def\unnchfopen #1{%
  \chapoddpage
  \vbox{\chapfonts \raggedtitlesettings #1\par}%
  \nobreak\bigskip\nobreak
}
\def\chfopen #1#2{\chapoddpage {\chapfonts
\vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}%
\par\penalty 5000 %
}
\def\centerchfopen #1{%
  \chapoddpage
  \vbox{\chapfonts \raggedtitlesettings \hfill #1\hfill}%
  \nobreak\bigskip \nobreak
}
\def\CHAPFopen{%
  \global\let\chapmacro=\chfopen
  \global\let\centerchapmacro=\centerchfopen}


% Section titles.  These macros combine the section number parts and
% call the generic \sectionheading to do the printing.
%
\newskip\secheadingskip
\def\secheadingbreak{\dobreak \secheadingskip{-1000}}

% Subsection titles.
\newskip\subsecheadingskip
\def\subsecheadingbreak{\dobreak \subsecheadingskip{-500}}

% Subsubsection titles.
\def\subsubsecheadingskip{\subsecheadingskip}
\def\subsubsecheadingbreak{\subsecheadingbreak}


% Print any size, any type, section title.
%
% #1 is the text, #2 is the section level (sec/subsec/subsubsec), #3 is
% the section type for xrefs (Ynumbered, Ynothing, Yappendix), #4 is the
% section number.
%
\def\seckeyword{sec}
%
\def\sectionheading#1#2#3#4{%
  {%
    \checkenv{}% should not be in an environment.
    %
    % Switch to the right set of fonts.
    \csname #2fonts\endcsname \rmisbold
    %
    \def\sectionlevel{#2}%
    \def\temptype{#3}%
    %
    % Insert first mark before the heading break (see notes for \domark).
    \let\prevsectiondefs=\lastsectiondefs
    \ifx\temptype\Ynothingkeyword
      \ifx\sectionlevel\seckeyword
        \gdef\lastsectiondefs{\gdef\thissectionname{#1}\gdef\thissectionnum{}%
                              \gdef\thissection{\thissectionname}}%
      \fi
    \else\ifx\temptype\Yomitfromtockeyword
      % Don't redefine \thissection.
    \else\ifx\temptype\Yappendixkeyword
      \ifx\sectionlevel\seckeyword
        \toks0={#1}%
        \xdef\lastsectiondefs{%
          \gdef\noexpand\thissectionname{\the\toks0}%
          \gdef\noexpand\thissectionnum{#4}%
          % \noexpand\putwordSection avoids expanding indigestible
          % commands in some of the translations.
          \gdef\noexpand\thissection{\noexpand\putwordSection{}
                                     \noexpand\thissectionnum:
                                     \noexpand\thissectionname}%
        }%
      \fi
    \else
      \ifx\sectionlevel\seckeyword
        \toks0={#1}%
        \xdef\lastsectiondefs{%
          \gdef\noexpand\thissectionname{\the\toks0}%
          \gdef\noexpand\thissectionnum{#4}%
          % \noexpand\putwordSection avoids expanding indigestible
          % commands in some of the translations.
          \gdef\noexpand\thissection{\noexpand\putwordSection{}
                                     \noexpand\thissectionnum:
                                     \noexpand\thissectionname}%
        }%
      \fi
    \fi\fi\fi
    %
    % Go into vertical mode.  Usually we'll already be there, but we
    % don't want the following whatsit to end up in a preceding paragraph
    % if the document didn't happen to have a blank line.
    \par
    %
    % Output the mark.  Pass it through \safewhatsit, to take care of
    % the preceding space.
    \safewhatsit\domark
    %
    % Insert space above the heading.
    \csname #2headingbreak\endcsname
    %
    % Now the second mark, after the heading break.  No break points
    % between here and the heading.
    \let\prevsectiondefs=\lastsectiondefs
    \domark
    %
    % Only insert the space after the number if we have a section number.
    \ifx\temptype\Ynothingkeyword
      \setbox0 = \hbox{}%
      \def\toctype{unn}%
      \gdef\lastsection{#1}%
    \else\ifx\temptype\Yomitfromtockeyword
      % for @headings -- no section number, don't include in toc,
      % and don't redefine \lastsection.
      \setbox0 = \hbox{}%
      \def\toctype{omit}%
      \let\sectionlevel=\empty
    \else\ifx\temptype\Yappendixkeyword
      \setbox0 = \hbox{#4\enspace}%
      \def\toctype{app}%
      \gdef\lastsection{#1}%
    \else
      \setbox0 = \hbox{#4\enspace}%
      \def\toctype{num}%
      \gdef\lastsection{#1}%
    \fi\fi\fi
    %
    % Write the toc entry (before \donoderef).  See comments in \chapmacro.
    \writetocentry{\toctype\sectionlevel}{#1}{#4}%
    %
    % Write the node reference (= pdf destination for pdftex).
    % Again, see comments in \chapmacro.
    \donoderef{#3}%
    %
    % Interline glue will be inserted when the vbox is completed.
    % That glue will be a valid breakpoint for the page, since it'll be
    % preceded by a whatsit (usually from the \donoderef, or from the
    % \writetocentry if there was no node).  We don't want to allow that
    % break, since then the whatsits could end up on page n while the
    % section is on page n+1, thus toc/etc. are wrong.  Debian bug 276000.
    \nobreak
    %
    % Output the actual section heading.
    \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \ptexraggedright
          \hangindent=\wd0  % zero if no section number
          \unhbox0 #1}%
  }%
  % Add extra space after the heading -- half of whatever came above it.
  % Don't allow stretch, though.
  \kern .5 \csname #2headingskip\endcsname
  %
  % Do not let the kern be a potential breakpoint, as it would be if it
  % was followed by glue.
  \nobreak
  %
  % We'll almost certainly start a paragraph next, so don't let that
  % glue accumulate.  (Not a breakpoint because it's preceded by a
  % discardable item.)  However, when a paragraph is not started next
  % (\startdefun, \cartouche, \center, etc.), this needs to be wiped out
  % or the negative glue will cause weirdly wrong output, typically
  % obscuring the section heading with something else.
  \vskip-\parskip
  %
  % This is so the last item on the main vertical list is a known
  % \penalty > 10000, so \startdefun, etc., can recognize the situation
  % and do the needful.
  \penalty 10001
}


\message{toc,}
% Table of contents.
\newwrite\tocfile

% Write an entry to the toc file, opening it if necessary.
% Called from @chapter, etc.
%
% Example usage: \writetocentry{sec}{Section Name}{\the\chapno.\the\secno}
% We append the current node name (if any) and page number as additional
% arguments for the \{chap,sec,...}entry macros which will eventually
% read this.  The node name is used in the pdf outlines as the
% destination to jump to.
%
% We open the .toc file for writing here instead of at @setfilename (or
% any other fixed time) so that @contents can be anywhere in the document.
% But if #1 is `omit', then we don't do anything.  This is used for the
% table of contents chapter openings themselves.
%
\newif\iftocfileopened
\def\omitkeyword{omit}%
%
\def\writetocentry#1#2#3{%
  \edef\writetoctype{#1}%
  \ifx\writetoctype\omitkeyword \else
    \iftocfileopened\else
      \immediate\openout\tocfile = \jobname.toc
      \global\tocfileopenedtrue
    \fi
    %
    \iflinks
      {\atdummies
       \edef\temp{%
         \write\tocfile{@#1entry{#2}{#3}{\lastnode}{\noexpand\folio}}}%
       \temp
      }%
    \fi
  \fi
  %
  % Tell \shipout to create a pdf destination on each page, if we're
  % writing pdf.  These are used in the table of contents.  We can't
  % just write one on every page because the title pages are numbered
  % 1 and 2 (the page numbers aren't printed), and so are the first
  % two pages of the document.  Thus, we'd have two destinations named
  % `1', and two named `2'.
  \ifpdf \global\pdfmakepagedesttrue \fi
}


% These characters do not print properly in the Computer Modern roman
% fonts, so we must take special care.  This is more or less redundant
% with the Texinfo input format setup at the end of this file.
%
\def\activecatcodes{%
  \catcode`\"=\active
  \catcode`\$=\active
  \catcode`\<=\active
  \catcode`\>=\active
  \catcode`\\=\active
  \catcode`\^=\active
  \catcode`\_=\active
  \catcode`\|=\active
  \catcode`\~=\active
}


% Read the toc file, which is essentially Texinfo input.
\def\readtocfile{%
  \setupdatafile
  \activecatcodes
  \input \tocreadfilename
}

\newskip\contentsrightmargin \contentsrightmargin=1in
\newcount\savepageno
\newcount\lastnegativepageno \lastnegativepageno = -1

% Prepare to read what we've written to \tocfile.
%
\def\startcontents#1{%
  % If @setchapternewpage on, and @headings double, the contents should
  % start on an odd page, unlike chapters.  Thus, we maintain
  % \contentsalignmacro in parallel with \pagealignmacro.
  % From: Torbjorn Granlund <tege@matematik.su.se>
  \contentsalignmacro
  \immediate\closeout\tocfile
  %
  % Don't need to put `Contents' or `Short Contents' in the headline.
  % It is abundantly clear what they are.
  \chapmacro{#1}{Yomitfromtoc}{}%
  %
  \savepageno = \pageno
  \begingroup                  % Set up to handle contents files properly.
    \raggedbottom              % Worry more about breakpoints than the bottom.
    \advance\hsize by -\contentsrightmargin % Don't use the full line length.
    %
    % Roman numerals for page numbers.
    \ifnum \pageno>0 \global\pageno = \lastnegativepageno \fi
}

% redefined for the two-volume lispref.  We always output on
% \jobname.toc even if this is redefined.
%
\def\tocreadfilename{\jobname.toc}

% Normal (long) toc.
%
\def\contents{%
  \startcontents{\putwordTOC}%
    \openin 1 \tocreadfilename\space
    \ifeof 1 \else
      \readtocfile
    \fi
    \vfill \eject
    \contentsalignmacro % in case @setchapternewpage odd is in effect
    \ifeof 1 \else
      \pdfmakeoutlines
    \fi
    \closein 1
  \endgroup
  \lastnegativepageno = \pageno
  \global\pageno = \savepageno
}

% And just the chapters.
\def\summarycontents{%
  \startcontents{\putwordShortTOC}%
    %
    \let\partentry = \shortpartentry
    \let\numchapentry = \shortchapentry
    \let\appentry = \shortchapentry
    \let\unnchapentry = \shortunnchapentry
    % We want a true roman here for the page numbers.
    \secfonts
    \let\rm=\shortcontrm \let\bf=\shortcontbf
    \let\sl=\shortcontsl \let\tt=\shortconttt
    \rm
    \hyphenpenalty = 10000
    \advance\baselineskip by 1pt % Open it up a little.
    \def\numsecentry##1##2##3##4{}
    \let\appsecentry = \numsecentry
    \let\unnsecentry = \numsecentry
    \let\numsubsecentry = \numsecentry
    \let\appsubsecentry = \numsecentry
    \let\unnsubsecentry = \numsecentry
    \let\numsubsubsecentry = \numsecentry
    \let\appsubsubsecentry = \numsecentry
    \let\unnsubsubsecentry = \numsecentry
    \openin 1 \tocreadfilename\space
    \ifeof 1 \else
      \readtocfile
    \fi
    \closein 1
    \vfill \eject
    \contentsalignmacro % in case @setchapternewpage odd is in effect
  \endgroup
  \lastnegativepageno = \pageno
  \global\pageno = \savepageno
}
\let\shortcontents = \summarycontents

% Typeset the label for a chapter or appendix for the short contents.
% The arg is, e.g., `A' for an appendix, or `3' for a chapter.
%
\def\shortchaplabel#1{%
  % This space should be enough, since a single number is .5em, and the
  % widest letter (M) is 1em, at least in the Computer Modern fonts.
  % But use \hss just in case.
  % (This space doesn't include the extra space that gets added after
  % the label; that gets put in by \shortchapentry above.)
  %
  % We'd like to right-justify chapter numbers, but that looks strange
  % with appendix letters.  And right-justifying numbers and
  % left-justifying letters looks strange when there are fewer than 10
  % chapters.  Have to read the whole toc once to know how many chapters
  % there are before deciding ...
  \hbox to 1em{#1\hss}%
}

% These macros generate individual entries in the table of contents.
% The first argument is the chapter or section name.
% The last argument is the page number.
% The arguments in between are the chapter number, section number, ...

% Parts, in the main contents.  Replace the part number, which doesn't
% exist, with an empty box.  Let's hope all the numbers have the same width.
% Also ignore the page number, which is conventionally not printed.
\def\numeralbox{\setbox0=\hbox{8}\hbox to \wd0{\hfil}}
\def\partentry#1#2#3#4{\dochapentry{\numeralbox\labelspace#1}{}}
%
% Parts, in the short toc.
\def\shortpartentry#1#2#3#4{%
  \penalty-300
  \vskip.5\baselineskip plus.15\baselineskip minus.1\baselineskip
  \shortchapentry{{\bf #1}}{\numeralbox}{}{}%
}

% Chapters, in the main contents.
\def\numchapentry#1#2#3#4{\dochapentry{#2\labelspace#1}{#4}}
%
% Chapters, in the short toc.
% See comments in \dochapentry re vbox and related settings.
\def\shortchapentry#1#2#3#4{%
  \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno\bgroup#4\egroup}%
}

% Appendices, in the main contents.
% Need the word Appendix, and a fixed-size box.
%
\def\appendixbox#1{%
  % We use M since it's probably the widest letter.
  \setbox0 = \hbox{\putwordAppendix{} M}%
  \hbox to \wd0{\putwordAppendix{} #1\hss}}
%
\def\appentry#1#2#3#4{\dochapentry{\appendixbox{#2}\labelspace#1}{#4}}

% Unnumbered chapters.
\def\unnchapentry#1#2#3#4{\dochapentry{#1}{#4}}
\def\shortunnchapentry#1#2#3#4{\tocentry{#1}{\doshortpageno\bgroup#4\egroup}}

% Sections.
\def\numsecentry#1#2#3#4{\dosecentry{#2\labelspace#1}{#4}}
\let\appsecentry=\numsecentry
\def\unnsecentry#1#2#3#4{\dosecentry{#1}{#4}}

% Subsections.
\def\numsubsecentry#1#2#3#4{\dosubsecentry{#2\labelspace#1}{#4}}
\let\appsubsecentry=\numsubsecentry
\def\unnsubsecentry#1#2#3#4{\dosubsecentry{#1}{#4}}

% And subsubsections.
\def\numsubsubsecentry#1#2#3#4{\dosubsubsecentry{#2\labelspace#1}{#4}}
\let\appsubsubsecentry=\numsubsubsecentry
\def\unnsubsubsecentry#1#2#3#4{\dosubsubsecentry{#1}{#4}}

% This parameter controls the indentation of the various levels.
% Same as \defaultparindent.
\newdimen\tocindent \tocindent = 15pt

% Now for the actual typesetting. In all these, #1 is the text and #2 is the
% page number.
%
% If the toc has to be broken over pages, we want the breaks to come at
% chapter entries if at all possible; hence the \penalty.
\def\dochapentry#1#2{%
   \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip
   \begingroup
     \chapentryfonts
     \tocentry{#1}{\dopageno\bgroup#2\egroup}%
   \endgroup
   \nobreak\vskip .25\baselineskip plus.1\baselineskip
}

\def\dosecentry#1#2{\begingroup
  \secentryfonts \leftskip=\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

\def\dosubsecentry#1#2{\begingroup
  \subsecentryfonts \leftskip=2\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

\def\dosubsubsecentry#1#2{\begingroup
  \subsubsecentryfonts \leftskip=3\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

% We use the same \entry macro as for the index entries.
\let\tocentry = \entry

% Space between chapter (or whatever) number and the title.
\def\labelspace{\hskip1em \relax}

\def\dopageno#1{{\rm #1}}
\def\doshortpageno#1{{\rm #1}}

\def\chapentryfonts{\secfonts \rm}
\def\secentryfonts{\textfonts}
\def\subsecentryfonts{\textfonts}
\def\subsubsecentryfonts{\textfonts}


\message{environments,}
% @foo ... @end foo.

% @tex ... @end tex    escapes into raw TeX temporarily.
% One exception: @ is still an escape character, so that @end tex works.
% But \@ or @@ will get a plain @ character.

\envdef\tex{%
  \setupmarkupstyle{tex}%
  \catcode `\\=0 \catcode `\{=1 \catcode `\}=2
  \catcode `\$=3 \catcode `\&=4 \catcode `\#=6
  \catcode `\^=7 \catcode `\_=8 \catcode `\~=\active \let~=\tie
  \catcode `\%=14
  \catcode `\+=\other
  \catcode `\"=\other
  \catcode `\|=\other
  \catcode `\<=\other
  \catcode `\>=\other
  \catcode`\`=\other
  \catcode`\'=\other
  \escapechar=`\\
  %
  % ' is active in math mode (mathcode"8000).  So reset it, and all our
  % other math active characters (just in case), to plain's definitions.
  \mathactive
  %
  \let\b=\ptexb
  \let\bullet=\ptexbullet
  \let\c=\ptexc
  \let\,=\ptexcomma
  \let\.=\ptexdot
  \let\dots=\ptexdots
  \let\equiv=\ptexequiv
  \let\!=\ptexexclam
  \let\i=\ptexi
  \let\indent=\ptexindent
  \let\noindent=\ptexnoindent
  \let\{=\ptexlbrace
  \let\+=\tabalign
  \let\}=\ptexrbrace
  \let\/=\ptexslash
  \let\*=\ptexstar
  \let\t=\ptext
  \expandafter \let\csname top\endcsname=\ptextop  % outer
  \let\frenchspacing=\plainfrenchspacing
  %
  \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}%
  \def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}%
  \def\@{@}%
}
% There is no need to define \Etex.

% Define @lisp ... @end lisp.
% @lisp environment forms a group so it can rebind things,
% including the definition of @end lisp (which normally is erroneous).

% Amount to narrow the margins by for @lisp.
\newskip\lispnarrowing \lispnarrowing=0.4in

% This is the definition that ^^M gets inside @lisp, @example, and other
% such environments.  \null is better than a space, since it doesn't
% have any width.
\def\lisppar{\null\endgraf}

% This space is always present above and below environments.
\newskip\envskipamount \envskipamount = 0pt

% Make spacing above and below environments symmetrical.  We use \parskip here
% to help in doing that, since in @example-like environments \parskip
% is reset to zero; thus the \afterenvbreak inserts no space -- but the
% start of the next paragraph will insert \parskip.
%
\def\aboveenvbreak{{%
  % =10000 instead of <10000 because of a special case in \itemzzz and
  % \sectionheading, q.v.
  \ifnum \lastpenalty=10000 \else
    \advance\envskipamount by \parskip
    \endgraf
    \ifdim\lastskip<\envskipamount
      \removelastskip
      % it's not a good place to break if the last penalty was \nobreak
      % or better ...
      \ifnum\lastpenalty<10000 \penalty-50 \fi
      \vskip\envskipamount
    \fi
  \fi
}}

\let\afterenvbreak = \aboveenvbreak

% \nonarrowing is a flag.  If "set", @lisp etc. don't narrow the margins;
% they also clear the flag, so that embedded environments do the narrowing
% again.
\let\nonarrowing=\relax

% @cartouche ... @end cartouche: draw rectangle w/rounded corners around
% environment contents.
\font\circle=lcircle10
\newdimen\circthick
\newdimen\cartouter\newdimen\cartinner
\newskip\normbskip\newskip\normpskip\newskip\normlskip
\circthick=\fontdimen8\circle
%
\def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth
\def\ctr{{\hskip 6pt\circle\char'010}}
\def\cbl{{\circle\char'012\hskip -6pt}}
\def\cbr{{\hskip 6pt\circle\char'011}}
\def\carttop{\hbox to \cartouter{\hskip\lskip
        \ctl\leaders\hrule height\circthick\hfil\ctr
        \hskip\rskip}}
\def\cartbot{\hbox to \cartouter{\hskip\lskip
        \cbl\leaders\hrule height\circthick\hfil\cbr
        \hskip\rskip}}
%
\newskip\lskip\newskip\rskip

\envdef\cartouche{%
  \ifhmode\par\fi  % can't be in the midst of a paragraph.
  \startsavinginserts
  \lskip=\leftskip \rskip=\rightskip
  \leftskip=0pt\rightskip=0pt % we want these *outside*.
  \cartinner=\hsize \advance\cartinner by-\lskip
  \advance\cartinner by-\rskip
  \cartouter=\hsize
  \advance\cartouter by 18.4pt	% allow for 3pt kerns on either
				% side, and for 6pt waste from
				% each corner char, and rule thickness
  \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip
  % Flag to tell @lisp, etc., not to narrow margin.
  \let\nonarrowing = t%
  %
  % If this cartouche directly follows a sectioning command, we need the
  % \parskip glue (backspaced over by default) or the cartouche can
  % collide with the section heading.
  \ifnum\lastpenalty>10000 \vskip\parskip \penalty\lastpenalty \fi
  %
  \vbox\bgroup
      \baselineskip=0pt\parskip=0pt\lineskip=0pt
      \carttop
      \hbox\bgroup
	  \hskip\lskip
	  \vrule\kern3pt
	  \vbox\bgroup
	      \kern3pt
	      \hsize=\cartinner
	      \baselineskip=\normbskip
	      \lineskip=\normlskip
	      \parskip=\normpskip
	      \vskip -\parskip
	      \comment % For explanation, see the end of def\group.
}
\def\Ecartouche{%
              \ifhmode\par\fi
	      \kern3pt
	  \egroup
	  \kern3pt\vrule
	  \hskip\rskip
      \egroup
      \cartbot
  \egroup
  \checkinserts
}


% This macro is called at the beginning of all the @example variants,
% inside a group.
\newdimen\nonfillparindent
\def\nonfillstart{%
  \aboveenvbreak
  \hfuzz = 12pt % Don't be fussy
  \sepspaces % Make spaces be word-separators rather than space tokens.
  \let\par = \lisppar % don't ignore blank lines
  \obeylines % each line of input is a line of output
  \parskip = 0pt
  % Turn off paragraph indentation but redefine \indent to emulate
  % the normal \indent.
  \nonfillparindent=\parindent
  \parindent = 0pt
  \let\indent\nonfillindent
  %
  \emergencystretch = 0pt % don't try to avoid overfull boxes
  \ifx\nonarrowing\relax
    \advance \leftskip by \lispnarrowing
    \exdentamount=\lispnarrowing
  \else
    \let\nonarrowing = \relax
  \fi
  \let\exdent=\nofillexdent
}

\begingroup
\obeyspaces
% We want to swallow spaces (but not other tokens) after the fake
% @indent in our nonfill-environments, where spaces are normally
% active and set to @tie, resulting in them not being ignored after
% @indent.
\gdef\nonfillindent{\futurelet\temp\nonfillindentcheck}%
\gdef\nonfillindentcheck{%
\ifx\temp %
\expandafter\nonfillindentgobble%
\else%
\leavevmode\nonfillindentbox%
\fi%
}%
\endgroup
\def\nonfillindentgobble#1{\nonfillindent}
\def\nonfillindentbox{\hbox to \nonfillparindent{\hss}}

% If you want all examples etc. small: @set dispenvsize small.
% If you want even small examples the full size: @set dispenvsize nosmall.
% This affects the following displayed environments:
%    @example, @display, @format, @lisp
%
\def\smallword{small}
\def\nosmallword{nosmall}
\let\SETdispenvsize\relax
\def\setnormaldispenv{%
  \ifx\SETdispenvsize\smallword
    % end paragraph for sake of leading, in case document has no blank
    % line.  This is redundant with what happens in \aboveenvbreak, but
    % we need to do it before changing the fonts, and it's inconvenient
    % to change the fonts afterward.
    \ifnum \lastpenalty=10000 \else \endgraf \fi
    \smallexamplefonts \rm
  \fi
}
\def\setsmalldispenv{%
  \ifx\SETdispenvsize\nosmallword
  \else
    \ifnum \lastpenalty=10000 \else \endgraf \fi
    \smallexamplefonts \rm
  \fi
}

% We often define two environments, @foo and @smallfoo.
% Let's do it in one command.  #1 is the env name, #2 the definition.
\def\makedispenvdef#1#2{%
  \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2}%
  \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2}%
  \expandafter\let\csname E#1\endcsname \afterenvbreak
  \expandafter\let\csname Esmall#1\endcsname \afterenvbreak
}
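% For example (illustrative expansion, not literal code from this file),
% \makedispenvdef{display}{\nonfillstart\gobble} amounts to:
%
%   \envdef\display{\setnormaldispenv \nonfillstart\gobble}
%   \envdef\smalldisplay{\setsmalldispenv \nonfillstart\gobble}
%   \let\Edisplay = \afterenvbreak
%   \let\Esmalldisplay = \afterenvbreak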

% Define two environment synonyms (#1 and #2) for an environment.
\def\maketwodispenvdef#1#2#3{%
  \makedispenvdef{#1}{#3}%
  \makedispenvdef{#2}{#3}%
}
%
% @lisp: indented, narrowed, typewriter font;
% @example: same as @lisp.
%
% @smallexample and @smalllisp: use smaller fonts.
% Originally contributed by Pavel@xerox.
%
\maketwodispenvdef{lisp}{example}{%
  \nonfillstart
  \tt\setupmarkupstyle{example}%
  \let\kbdfont = \kbdexamplefont % Allow @kbd to do something special.
  \gobble % eat return
}
% @display/@smalldisplay: same as @lisp except keep current font.
%
\makedispenvdef{display}{%
  \nonfillstart
  \gobble
}

% @format/@smallformat: same as @display except don't narrow margins.
%
\makedispenvdef{format}{%
  \let\nonarrowing = t%
  \nonfillstart
  \gobble
}

% @flushleft: same as @format, but doesn't obey \SETdispenvsize.
\envdef\flushleft{%
  \let\nonarrowing = t%
  \nonfillstart
  \gobble
}
\let\Eflushleft = \afterenvbreak

% @flushright.
%
\envdef\flushright{%
  \let\nonarrowing = t%
  \nonfillstart
  \advance\leftskip by 0pt plus 1fill\relax
  \gobble
}
\let\Eflushright = \afterenvbreak


% @raggedright does more-or-less normal line breaking but no right
% justification.  From plain.tex.
\envdef\raggedright{%
  \rightskip0pt plus2em \spaceskip.3333em \xspaceskip.5em\relax
}
\let\Eraggedright\par

\envdef\raggedleft{%
  \parindent=0pt \leftskip0pt plus2em
  \spaceskip.3333em \xspaceskip.5em \parfillskip=0pt
  \hbadness=10000 % Last line will usually be underfull, so turn off
                  % badness reporting.
}
\let\Eraggedleft\par

\envdef\raggedcenter{%
  \parindent=0pt \rightskip0pt plus1em \leftskip0pt plus1em
  \spaceskip.3333em \xspaceskip.5em \parfillskip=0pt
  \hbadness=10000 % Last line will usually be underfull, so turn off
                  % badness reporting.
}
\let\Eraggedcenter\par


% @quotation does normal linebreaking (hence we can't use \nonfillstart)
% and narrows the margins.  We keep \parskip nonzero in general, since
% we're doing normal filling.  So, when using \aboveenvbreak and
% \afterenvbreak, temporarily make \parskip 0.
%
\makedispenvdef{quotation}{\quotationstart}
%
\def\quotationstart{%
  \indentedblockstart % same as \indentedblock, but increase right margin too.
  \ifx\nonarrowing\relax
    \advance\rightskip by \lispnarrowing
  \fi
  \parsearg\quotationlabel
}

% We have retained a nonzero parskip for the environment, since we're
% doing normal filling.
%
\def\Equotation{%
  \par
  \ifx\quotationauthor\thisisundefined\else
    % indent a bit.
    \leftline{\kern 2\leftskip \sl ---\quotationauthor}%
  \fi
  {\parskip=0pt \afterenvbreak}%
}
\def\Esmallquotation{\Equotation}

% If we're given an argument, typeset it in bold with a colon after.
\def\quotationlabel#1{%
  \def\temp{#1}%
  \ifx\temp\empty \else
    {\bf #1: }%
  \fi
}

% @indentedblock is like @quotation, but indents only on the left and
% has no optional argument.
% 
\makedispenvdef{indentedblock}{\indentedblockstart}
%
\def\indentedblockstart{%
  {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip
  \parindent=0pt
  %
  % @cartouche defines \nonarrowing to inhibit narrowing at next level down.
  \ifx\nonarrowing\relax
    \advance\leftskip by \lispnarrowing
    \exdentamount = \lispnarrowing
  \else
    \let\nonarrowing = \relax
  \fi
}

% Keep a nonzero parskip for the environment, since we're doing normal filling.
%
\def\Eindentedblock{%
  \par
  {\parskip=0pt \afterenvbreak}%
}
\def\Esmallindentedblock{\Eindentedblock}


% LaTeX-like @verbatim...@end verbatim and @verb{<char>...<char>}
% If we want to allow any <char> as delimiter,
% we need the curly braces so that makeinfo sees the @verb command, eg:
% `@verbx...x' would look like the '@verbx' command.  --janneke@gnu.org
%
% [Knuth]: Donald Ervin Knuth, 1996.  The TeXbook.
%
% [Knuth] p.344; only we need to do the other characters Texinfo sets
% active too.  Otherwise, they get lost as the first character on a
% verbatim line.
\def\dospecials{%
  \do\ \do\\\do\{\do\}\do\$\do\&%
  \do\#\do\^\do\^^K\do\_\do\^^A\do\%\do\~%
  \do\<\do\>\do\|\do\@\do+\do\"%
  % Don't do the quotes -- if we do, @set txicodequoteundirected and
  % @set txicodequotebacktick will not have effect on @verb and
  % @verbatim, and ?` and !` ligatures won't get disabled.
  %\do\`\do\'%
}
%
% [Knuth] p. 380
\def\uncatcodespecials{%
  \def\do##1{\catcode`##1=\other}\dospecials}
%
% Setup for the @verb command.
%
% Eight spaces for a tab
\begingroup
  \catcode`\^^I=\active
  \gdef\tabeightspaces{\catcode`\^^I=\active\def^^I{\ \ \ \ \ \ \ \ }}
\endgroup
%
\def\setupverb{%
  \tt  % easiest (and conventionally used) font for verbatim
  \def\par{\leavevmode\endgraf}%
  \setupmarkupstyle{verb}%
  \tabeightspaces
  % Respect line breaks,
  % print special symbols as themselves, and
  % make each space count
  % must do in this order:
  \obeylines \uncatcodespecials \sepspaces
}

% Setup for the @verbatim environment
%
% Real tab expansion.
\newdimen\tabw \setbox0=\hbox{\tt\space} \tabw=8\wd0 % tab amount
%
% We typeset each line of the verbatim in an \hbox, so we can handle
% tabs.  The \global is in case the verbatim line starts with an accent,
% or some other command that starts with a begin-group.  Otherwise, the
% entire \verbbox would disappear at the corresponding end-group, before
% it is typeset.  Meanwhile, we can't have nested verbatim commands
% (can we?), so the \global won't be overwriting itself.
\newbox\verbbox
\def\starttabbox{\global\setbox\verbbox=\hbox\bgroup}
%
\begingroup
  \catcode`\^^I=\active
  \gdef\tabexpand{%
    \catcode`\^^I=\active
    \def^^I{\leavevmode\egroup
      \dimen\verbbox=\wd\verbbox % the width so far, or since the previous tab
      \divide\dimen\verbbox by\tabw
      \multiply\dimen\verbbox by\tabw % compute previous multiple of \tabw
      \advance\dimen\verbbox by\tabw  % advance to next multiple of \tabw
      \wd\verbbox=\dimen\verbbox \box\verbbox \starttabbox
    }%
  }
\endgroup
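% Worked example (ours): suppose \tabw is 8 typewriter-space widths and
% 11 character widths of text precede the tab.  Then \dimen\verbbox is
% 11 widths; dividing and re-multiplying by \tabw truncates that to 8,
% and advancing by \tabw gives 16, so the box is padded out to the next
% multiple-of-8 column, as a tab stop should be.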

% start the verbatim environment.
\def\setupverbatim{%
  \let\nonarrowing = t%
  \nonfillstart
  \tt % easiest (and conventionally used) font for verbatim
  % The \leavevmode here is for blank lines.  Otherwise, we would
  % never \starttabbox and the \egroup would end verbatim mode.
  \def\par{\leavevmode\egroup\box\verbbox\endgraf}%
  \tabexpand
  \setupmarkupstyle{verbatim}%
  % Respect line breaks,
  % print special symbols as themselves, and
  % make each space count.
  % Must do in this order:
  \obeylines \uncatcodespecials \sepspaces
  \everypar{\starttabbox}%
}

% Do the @verb magic: verbatim text is quoted by unique
% delimiter characters.  Before first delimiter expect a
% right brace, after last delimiter expect closing brace:
%
%    \def\doverb'{'<char>#1<char>'}'{#1}
%
% [Knuth] p. 382; only eat outer {}
\begingroup
  \catcode`[=1\catcode`]=2\catcode`\{=\other\catcode`\}=\other
  \gdef\doverb{#1[\def\next##1#1}[##1\endgroup]\next]
\endgroup
%
\def\verb{\begingroup\setupverb\doverb}
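% Illustrative usage (example text is ours):
%
%   @verb{|@commands and {braces} print literally here|}
%
% where `|' is the chosen delimiter; any character not occurring in the
% text may be used.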
%
%
% Do the @verbatim magic: define the macro \doverbatim so that
% the (first) argument ends when '@end verbatim' is reached, ie:
%
%     \def\doverbatim#1@end verbatim{#1}
%
% For Texinfo it's a lot easier than for LaTeX,
% because texinfo's \verbatim doesn't stop at '\end{verbatim}':
% we need not redefine '\', '{' and '}'.
%
% Inspired by LaTeX's verbatim command set [latex.ltx]
%
\begingroup
  \catcode`\ =\active
  \obeylines %
  % ignore everything up to the first ^^M, that's the newline at the end
  % of the @verbatim input line itself.  Otherwise we get an extra blank
  % line in the output.
  \xdef\doverbatim#1^^M#2@end verbatim{#2\noexpand\end\gobble verbatim}%
  % We really want {...\end verbatim} in the body of the macro, but
  % without the active space; thus we have to use \xdef and \gobble.
\endgroup
%
\envdef\verbatim{%
    \setupverbatim\doverbatim
}
\let\Everbatim = \afterenvbreak
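% Illustrative usage (example text is ours):
%
%   @verbatim
%   everything here, including @commands, {braces},
%   and leading spaces, is output exactly as typed.
%   @end verbatim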


% @verbatiminclude FILE - insert text of file in verbatim environment.
%
\def\verbatiminclude{\parseargusing\filenamecatcodes\doverbatiminclude}
%
\def\doverbatiminclude#1{%
  {%
    \makevalueexpandable
    \setupverbatim
    \indexnofonts       % Allow `@@' and other weird things in file names.
    \wlog{texinfo.tex: doing @verbatiminclude of #1^^J}%
    \input #1
    \afterenvbreak
  }%
}

% @copying ... @end copying.
% Save the text away for @insertcopying later.
%
% We save the uninterpreted tokens, rather than creating a box.
% Saving the text in a box would be much easier, but then all the
% typesetting commands (@smallbook, font changes, etc.) have to be done
% beforehand -- and a) we want @copying to be done first in the source
% file; b) letting users define the frontmatter in as flexible an order as
% possible is very desirable.
%
\def\copying{\checkenv{}\begingroup\scanargctxt\docopying}
\def\docopying#1@end copying{\endgroup\def\copyingtext{#1}}
%
\def\insertcopying{%
  \begingroup
    \parindent = 0pt  % paragraph indentation looks wrong on title page
    \scanexp\copyingtext
  \endgroup
}


\message{defuns,}
% @defun etc.

\newskip\defbodyindent \defbodyindent=.4in
\newskip\defargsindent \defargsindent=50pt
\newskip\deflastargmargin \deflastargmargin=18pt
\newcount\defunpenalty

% Start the processing of @deffn:
\def\startdefun{%
  \ifnum\lastpenalty<10000
    \medbreak
    \defunpenalty=10003 % Will keep this @deffn together with the
                        % following @def command, see below.
  \else
    % If there are two @def commands in a row, we'll have a \nobreak,
    % which is there to keep the function description together with its
    % header.  But if there's nothing but headers, we need to allow a
    % break somewhere.  Check specifically for penalty 10002, inserted
    % by \printdefunline, instead of 10000, since the sectioning
    % commands also insert a nobreak penalty, and we don't want to allow
    % a break between a section heading and a defun.
    %
    % As a further refinement, we avoid "club" headers by signalling
    % with penalty of 10003 after the very first @deffn in the
    % sequence (see above), and penalty of 10002 after any following
    % @def command.
    \ifnum\lastpenalty=10002 \penalty2000 \else \defunpenalty=10002 \fi
    %
    % Similarly, after a section heading, do not allow a break.
    % But do insert the glue.
    \medskip  % preceded by discardable penalty, so not a breakpoint
  \fi
  %
  \parindent=0in
  \advance\leftskip by \defbodyindent
  \exdentamount=\defbodyindent
}

\def\dodefunx#1{%
  % First, check whether we are in the right environment:
  \checkenv#1%
  %
  % As above, allow line break if we have multiple x headers in a row.
  % It's not a great place, though.
  \ifnum\lastpenalty=10002 \penalty3000 \else \defunpenalty=10002 \fi
  %
  % And now, it's time to reuse the body of the original defun:
  \expandafter\gobbledefun#1%
}
\def\gobbledefun#1\startdefun{}

% \printdefunline \deffnheader{text}
%
\def\printdefunline#1#2{%
  \begingroup
    % call \deffnheader:
    #1#2 \endheader
    % common ending:
    \interlinepenalty = 10000
    \advance\rightskip by 0pt plus 1fil\relax
    \endgraf
    \nobreak\vskip -\parskip
    \penalty\defunpenalty  % signal to \startdefun and \dodefunx
    % Some of the @defun-type tags do not enable magic parentheses,
    % rendering the following check redundant.  But we don't optimize.
    \checkparencounts
  \endgroup
}

\def\Edefun{\endgraf\medbreak}

% \makedefun{deffn} creates \deffn, \deffnx and \Edeffn;
% the only thing remaining is to define \deffnheader.
%
\def\makedefun#1{%
  \expandafter\let\csname E#1\endcsname = \Edefun
  \edef\temp{\noexpand\domakedefun
    \makecsname{#1}\makecsname{#1x}\makecsname{#1header}}%
  \temp
}

% \domakedefun \deffn \deffnx \deffnheader
%
% Define \deffn and \deffnx, without parameters.
% \deffnheader has to be defined explicitly.
%
\def\domakedefun#1#2#3{%
  \envdef#1{%
    \startdefun
    \doingtypefnfalse    % distinguish typed functions from all else
    \parseargusing\activeparens{\printdefunline#3}%
  }%
  \def#2{\dodefunx#1}%
  \def#3%
}

\newif\ifdoingtypefn       % doing typed function?
\newif\ifrettypeownline    % typeset return type on its own line?

% @deftypefnnewline on|off says whether the return type of typed functions
% are printed on their own line.  This affects @deftypefn, @deftypefun,
% @deftypeop, and @deftypemethod.
% 
\parseargdef\deftypefnnewline{%
  \def\temp{#1}%
  \ifx\temp\onword
    \expandafter\let\csname SETtxideftypefnnl\endcsname
      = \empty
  \else\ifx\temp\offword
    \expandafter\let\csname SETtxideftypefnnl\endcsname
      = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @txideftypefnnl value `\temp',
                must be on|off}%
  \fi\fi
}
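% Illustrative usage (ours): after
%
%   @deftypefnnewline on
%
% a typed definition such as @deftypefn puts its return type on a line
% of its own above the function name, instead of on the same line.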

% Untyped functions:

% @deffn category name args
\makedefun{deffn}{\deffngeneral{}}

% @deffn category class name args
\makedefun{defop}#1 {\defopon{#1\ \putwordon}}

% \defopon {category on}class name args
\def\defopon#1#2 {\deffngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }

% \deffngeneral {subind}category name args
%
\def\deffngeneral#1#2 #3 #4\endheader{%
  % Remember that \dosubind{fn}{foo}{} is equivalent to \doind{fn}{foo}.
  \dosubind{fn}{\code{#3}}{#1}%
  \defname{#2}{}{#3}\magicamp\defunargs{#4\unskip}%
}
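% Example (ours): the document line
%   @deffn Command forward-word count
% arrives here as {}Command forward-word count \endheader, so #2 is the
% category `Command', #3 is the indexed name `forward-word', and #4 is
% the remaining arguments, `count'.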

% Typed functions:

% @deftypefn category type name args
\makedefun{deftypefn}{\deftypefngeneral{}}

% @deftypeop category class type name args
\makedefun{deftypeop}#1 {\deftypeopon{#1\ \putwordon}}

% \deftypeopon {category on}class type name args
\def\deftypeopon#1#2 {\deftypefngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }

% \deftypefngeneral {subind}category type name args
%
\def\deftypefngeneral#1#2 #3 #4 #5\endheader{%
  \dosubind{fn}{\code{#4}}{#1}%
  \doingtypefntrue
  \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
}

% Typed variables:

% @deftypevr category type var args
\makedefun{deftypevr}{\deftypecvgeneral{}}

% @deftypecv category class type var args
\makedefun{deftypecv}#1 {\deftypecvof{#1\ \putwordof}}

% \deftypecvof {category of}class type var args
\def\deftypecvof#1#2 {\deftypecvgeneral{\putwordof\ \code{#2}}{#1\ \code{#2}} }

% \deftypecvgeneral {subind}category type var args
%
\def\deftypecvgeneral#1#2 #3 #4 #5\endheader{%
  \dosubind{vr}{\code{#4}}{#1}%
  \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
}

% Untyped variables:

% @defvr category var args
\makedefun{defvr}#1 {\deftypevrheader{#1} {} }

% @defcv category class var args
\makedefun{defcv}#1 {\defcvof{#1\ \putwordof}}

% \defcvof {category of}class var args
\def\defcvof#1#2 {\deftypecvof{#1}#2 {} }

% Types:

% @deftp category name args
\makedefun{deftp}#1 #2 #3\endheader{%
  \doind{tp}{\code{#2}}%
  \defname{#1}{}{#2}\defunargs{#3\unskip}%
}

% Remaining @defun-like shortcuts:
\makedefun{defun}{\deffnheader{\putwordDeffunc} }
\makedefun{defmac}{\deffnheader{\putwordDefmac} }
\makedefun{defspec}{\deffnheader{\putwordDefspec} }
\makedefun{deftypefun}{\deftypefnheader{\putwordDeffunc} }
\makedefun{defvar}{\defvrheader{\putwordDefvar} }
\makedefun{defopt}{\defvrheader{\putwordDefopt} }
\makedefun{deftypevar}{\deftypevrheader{\putwordDefvar} }
\makedefun{defmethod}{\defopon\putwordMethodon}
\makedefun{deftypemethod}{\deftypeopon\putwordMethodon}
\makedefun{defivar}{\defcvof\putwordInstanceVariableof}
\makedefun{deftypeivar}{\deftypecvof\putwordInstanceVariableof}

% \defname, which formats the name of the @def (not the args).
% #1 is the category, such as "Function".
% #2 is the return type, if any.
% #3 is the function name.
%
% We are followed by (but not passed) the arguments, if any.
%
\def\defname#1#2#3{%
  \par
  % Get the values of \leftskip and \rightskip as they were outside the @def...
  \advance\leftskip by -\defbodyindent
  %
  % Determine if we are typesetting the return type of a typed function
  % on a line by itself.
  \rettypeownlinefalse
  \ifdoingtypefn  % doing a typed function specifically?
    % then check user option for putting return type on its own line:
    \expandafter\ifx\csname SETtxideftypefnnl\endcsname\relax \else
      \rettypeownlinetrue
    \fi
  \fi
  %
  % How we'll format the category name.  Putting it in brackets helps
  % distinguish it from the body text that may end up on the next line
  % just below it.
  \def\temp{#1}%
  \setbox0=\hbox{\kern\deflastargmargin \ifx\temp\empty\else [\rm\temp]\fi}
  %
  % Figure out line sizes for the paragraph shape.  We'll always have at
  % least two.
  \tempnum = 2
  %
  % The first line needs space for \box0; but if \rightskip is nonzero,
  % we need only space for the part of \box0 which exceeds it:
  \dimen0=\hsize  \advance\dimen0 by -\wd0  \advance\dimen0 by \rightskip
  %
  % If doing a return type on its own line, we'll have another line.
  \ifrettypeownline
    \advance\tempnum by 1
    \def\maybeshapeline{0in \hsize}%
  \else
    \def\maybeshapeline{}%
  \fi
  %
  % The continuations:
  \dimen2=\hsize  \advance\dimen2 by -\defargsindent
  %
  % The final paragraph shape:
  \parshape \tempnum  0in \dimen0  \maybeshapeline  \defargsindent \dimen2
  %
  % Put the category name at the right margin.
  \noindent
  \hbox to 0pt{%
    \hfil\box0 \kern-\hsize
    % \hsize has to be shortened this way:
    \kern\leftskip
    % Intentionally do not respect \rightskip, since we need the space.
  }%
  %
  % Allow all lines to be underfull without complaint:
  \tolerance=10000 \hbadness=10000
  \exdentamount=\defbodyindent
  {%
    % defun fonts. We use typewriter by default (used to be bold) because:
    % . we're printing identifiers, they should be in tt in principle.
    % . in languages with many accents, such as Czech or French, it's
    %   common to leave accents off identifiers.  The result looks ok in
    %   tt, but exceedingly strange in rm.
    % . we don't want -- and --- to be treated as ligatures.
    % . this still does not fix the ?` and !` ligatures, but so far no
    %   one has made identifiers using them :).
    \df \tt
    \def\temp{#2}% text of the return type
    \ifx\temp\empty\else
      \tclose{\temp}% typeset the return type
      \ifrettypeownline
        % put return type on its own line; prohibit line break following:
        \hfil\vadjust{\nobreak}\break  
      \else
        \space  % type on same line, so just followed by a space
      \fi
    \fi           % no return type
    #3% output function name
  }%
  {\rm\enskip}% hskip 0.5 em of \tenrm
  %
  \boldbrax
  % arguments will be output next, if any.
}

% Print arguments in slanted roman (not ttsl), inconsistently with using
% tt for the name.  This is because literal text is sometimes needed in
% the argument list (groff manual), and ttsl and tt are not very
% distinguishable.  Prevent hyphenation at `-' chars.
%
\def\defunargs#1{%
  % use sl by default (not ttsl),
  % tt for the names.
  \df \sl \hyphenchar\font=0
  %
  % On the other hand, if an argument has two dashes (for instance), we
  % want a way to get ttsl.  We used to recommend @var for that, so
  % leave the code in, but it's strange for @var to lead to typewriter.
  % Nowadays we recommend @code, since the difference between a ttsl hyphen
  % and a tt hyphen is pretty tiny.  @code also disables ?` !`.
  \def\var##1{{\setupmarkupstyle{var}\ttslanted{##1}}}%
  #1%
  \sl\hyphenchar\font=45
}

% We want ()&[] to print specially on the defun line.
%
\def\activeparens{%
  \catcode`\(=\active \catcode`\)=\active
  \catcode`\[=\active \catcode`\]=\active
  \catcode`\&=\active
}

% Make control sequences which act like normal parenthesis chars.
\let\lparen = ( \let\rparen = )

% Be sure that we always have a definition for `(', etc.  For example,
% if the fn name has parens in it, \boldbrax will not be in effect yet,
% so TeX would otherwise complain about undefined control sequence.
{
  \activeparens
  \global\let(=\lparen \global\let)=\rparen
  \global\let[=\lbrack \global\let]=\rbrack
  \global\let& = \&

  \gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb}
  \gdef\magicamp{\let&=\amprm}
}

\newcount\parencount

% If we encounter &foo, then turn on ()-hacking afterwards
\newif\ifampseen
\def\amprm#1 {\ampseentrue{\bf\&#1 }}

\def\parenfont{%
  \ifampseen
    % At the first level, print parens in roman,
    % otherwise use the default font.
    \ifnum \parencount=1 \rm \fi
  \else
    % The \sf parens (in \boldbrax) actually are a little bolder than
    % the contained text.  This is especially needed for [ and ] .
    \sf
  \fi
}
\def\infirstlevel#1{%
  \ifampseen
    \ifnum\parencount=1
      #1%
    \fi
  \fi
}
\def\bfafterword#1 {#1 \bf}

\def\opnr{%
  \global\advance\parencount by 1
  {\parenfont(}%
  \infirstlevel \bfafterword
}
\def\clnr{%
  {\parenfont)}%
  \infirstlevel \sl
  \global\advance\parencount by -1
}

\newcount\brackcount
\def\lbrb{%
  \global\advance\brackcount by 1
  {\bf[}%
}
\def\rbrb{%
  {\bf]}%
  \global\advance\brackcount by -1
}

\def\checkparencounts{%
  \ifnum\parencount=0 \else \badparencount \fi
  \ifnum\brackcount=0 \else \badbrackcount \fi
}
% these should not use \errmessage; the glibc manual, at least, actually
% has such constructs (when documenting function pointers).
\def\badparencount{%
  \message{Warning: unbalanced parentheses in @def...}%
  \global\parencount=0
}
\def\badbrackcount{%
  \message{Warning: unbalanced square brackets in @def...}%
  \global\brackcount=0
}


\message{macros,}
% @macro.

% To do this right we need a feature of e-TeX, \scantokens,
% which we arrange to emulate with a temporary file in ordinary TeX.
\ifx\eTeXversion\thisisundefined
  \newwrite\macscribble
  \def\scantokens#1{%
    \toks0={#1}%
    \immediate\openout\macscribble=\jobname.tmp
    \immediate\write\macscribble{\the\toks0}%
    \immediate\closeout\macscribble
    \input \jobname.tmp
  }
\fi

\def\scanmacro#1{\begingroup
  \newlinechar`\^^M
  \let\xeatspaces\eatspaces
  %
  % Undo catcode changes of \startcontents and \doprintindex
  % When called from @insertcopying or (short)caption, we need active
  % backslash to get it printed correctly.  Previously, we had
  % \catcode`\\=\other instead.  We'll see whether a problem appears
  % with macro expansion.				--kasal, 19aug04
  \catcode`\@=0 \catcode`\\=\active \escapechar=`\@
  %
  % ... and for \example:
  \spaceisspace
  %
  % The \empty here causes a following catcode 5 newline to be eaten as
  % part of reading whitespace after a control sequence.  It does not
  % eat a catcode 13 newline.  There's no good way to handle the two
  % cases (untried: maybe e-TeX's \everyeof could help, though plain TeX
  % would then have different behavior).  See the Macro Details node in
  % the manual for the workaround we recommend for macros and
  % line-oriented commands.
  % 
  \scantokens{#1\empty}%
\endgroup}

\def\scanexp#1{%
  \edef\temp{\noexpand\scanmacro{#1}}%
  \temp
}

\newcount\paramno   % Count of parameters
\newtoks\macname    % Macro name
\newif\ifrecursive  % Is it recursive?

% List of all defined macros in the form
%    \definedummyword\macro1\definedummyword\macro2...
% Currently it also contains all @aliases; the list can be split
% if there is a need.
\def\macrolist{}

% Add the macro to \macrolist
\def\addtomacrolist#1{\expandafter \addtomacrolistxxx \csname#1\endcsname}
\def\addtomacrolistxxx#1{%
     \toks0 = \expandafter{\macrolist\definedummyword#1}%
     \xdef\macrolist{\the\toks0}%
}

% Utility routines.
% This does \let #1 = #2, with \csnames; that is,
%   \let \csname#1\endcsname = \csname#2\endcsname
% (except of course we have to play expansion games).
%
\def\cslet#1#2{%
  \expandafter\let
  \csname#1\expandafter\endcsname
  \csname#2\endcsname
}

% Trim leading and trailing spaces off a string.
% Concepts from aro-bend problem 15 (see CTAN).
{\catcode`\@=11
\gdef\eatspaces #1{\expandafter\trim@\expandafter{#1 }}
\gdef\trim@ #1{\trim@@ @#1 @ #1 @ @@}
\gdef\trim@@ #1@ #2@ #3@@{\trim@@@\empty #2 @}
\def\unbrace#1{#1}
\unbrace{\gdef\trim@@@ #1 } #2@{#1}
}
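% Example (ours): \eatspaces{ foo bar } yields `foo bar'; the
% surrounding spaces go away, while the interior space is kept.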

% Trim a single trailing ^^M off a string.
{\catcode`\^^M=\other \catcode`\Q=3%
\gdef\eatcr #1{\eatcra #1Q^^MQ}%
\gdef\eatcra#1^^MQ{\eatcrb#1Q}%
\gdef\eatcrb#1Q#2Q{#1}%
}

% Macro bodies are absorbed as an argument in a context where
% all characters are catcode 10, 11 or 12, except \ which is active
% (as in normal texinfo). It is necessary to change the definition of \
% to recognize macro arguments; this is the job of \mbodybackslash.
%
% Non-ASCII encodings make 8-bit characters active, so un-activate
% them to avoid their expansion.  Must do this non-globally, to
% confine the change to the current group.
%
% It's necessary to have hard CRs when the macro is executed. This is
% done by making ^^M (\endlinechar) catcode 12 when reading the macro
% body, and then making it the \newlinechar in \scanmacro.
%
\def\scanctxt{% used as subroutine
  \catcode`\"=\other
  \catcode`\+=\other
  \catcode`\<=\other
  \catcode`\>=\other
  \catcode`\@=\other
  \catcode`\^=\other
  \catcode`\_=\other
  \catcode`\|=\other
  \catcode`\~=\other
  \ifx\declaredencoding\ascii \else \setnonasciicharscatcodenonglobal\other \fi
}

\def\scanargctxt{% used for copying and captions, not macros.
  \scanctxt
  \catcode`\\=\other
  \catcode`\^^M=\other
}

\def\macrobodyctxt{% used for @macro definitions
  \scanctxt
  \catcode`\{=\other
  \catcode`\}=\other
  \catcode`\^^M=\other
  \usembodybackslash
}

\def\macroargctxt{% used when scanning invocations
  \scanctxt
  \catcode`\\=0
}
% why catcode 0 for \ in the above?  To recognize \\ \{ \} as "escapes"
% for the single characters \ { }.  Thus, we end up with the "commands"
% that would be written @\ @{ @} in a Texinfo document.
% 
% We already have @{ and @}.  For @\, we define it here, and only for
% this purpose, to produce a typewriter backslash (so, the @\ that we
% define for @math can't be used with @macro calls):
%
\def\\{\normalbackslash}%
% 
% We would like to do this for \, too, since that is what makeinfo does.
% But it is not possible, because Texinfo already has a command @, for a
% cedilla accent.  Documents must use @comma{} instead.
%
% \anythingelse will almost certainly be an error of some kind.


% \mbodybackslash is the definition of \ in @macro bodies.
% It maps \foo\ => \csname macarg.foo\endcsname => #N
% where N is the macro parameter number.
% We define \csname macarg.\endcsname to be \realbackslash, so
% \\ in macro replacement text gets you a backslash.
%
{\catcode`@=0 @catcode`@\=@active
 @gdef@usembodybackslash{@let\=@mbodybackslash}
 @gdef@mbodybackslash#1\{@csname macarg.#1@endcsname}
}
\expandafter\def\csname macarg.\endcsname{\realbackslash}

\def\margbackslash#1{\char`\#1 }

\def\macro{\recursivefalse\parsearg\macroxxx}
\def\rmacro{\recursivetrue\parsearg\macroxxx}

\def\macroxxx#1{%
  \getargs{#1}% now \macname is the macname and \argl the arglist
  \ifx\argl\empty       % no arguments
     \paramno=0\relax
  \else
     \expandafter\parsemargdef \argl;%
     \ifnum\paramno>256\relax
       \ifx\eTeXversion\thisisundefined
         \errhelp = \EMsimple
         \errmessage{You need eTeX to compile a file with macros with more than 256 arguments}
       \fi
     \fi
  \fi
  \if1\csname ismacro.\the\macname\endcsname
     \message{Warning: redefining \the\macname}%
  \else
     \expandafter\ifx\csname \the\macname\endcsname \relax
     \else \errmessage{Macro name \the\macname\space already defined}\fi
     \global\cslet{macsave.\the\macname}{\the\macname}%
     \global\expandafter\let\csname ismacro.\the\macname\endcsname=1%
     \addtomacrolist{\the\macname}%
  \fi
  \begingroup \macrobodyctxt
  \ifrecursive \expandafter\parsermacbody
  \else \expandafter\parsemacbody
  \fi}
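% Illustrative usage in a document (example is ours):
%
%   @macro greet{name}
%   Hello, \name\!
%   @end macro
%
% After this, @greet{world} expands to `Hello, world!'.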

\parseargdef\unmacro{%
  \if1\csname ismacro.#1\endcsname
    \global\cslet{#1}{macsave.#1}%
    \global\expandafter\let \csname ismacro.#1\endcsname=0%
    % Remove the macro name from \macrolist:
    \begingroup
      \expandafter\let\csname#1\endcsname \relax
      \let\definedummyword\unmacrodo
      \xdef\macrolist{\macrolist}%
    \endgroup
  \else
    \errmessage{Macro #1 not defined}%
  \fi
}

% Called via \definedummyword from \unmacro on each macro in \macrolist.
% The idea is to omit any macro definitions that have been changed to
% \relax.
%
\def\unmacrodo#1{%
  \ifx #1\relax
    % remove this
  \else
    \noexpand\definedummyword \noexpand#1%
  \fi
}

% This makes use of the obscure feature that if the last token of a
% <parameter list> is #, then the preceding argument is delimited by
% an opening brace, and that opening brace is not consumed.
\def\getargs#1{\getargsxxx#1{}}
\def\getargsxxx#1#{\getmacname #1 \relax\getmacargs}
\def\getmacname#1 #2\relax{\macname={#1}}
\def\getmacargs#1{\def\argl{#1}}

% For macro processing make @ a letter so that we can make Texinfo private macro names.
\edef\texiatcatcode{\the\catcode`\@}
\catcode `@=11\relax

% Parse the optional {params} list.  Set up \paramno and \paramlist
% so \defmacro knows what to do.  Define \macarg.BLAH for each BLAH
% in the params list to some hook where the argument is to be expanded.  If
% there are fewer than 10 arguments, that hook is replaced by ##N, where N
% is the position in the list; that is, the macro arguments are defined
% a la TeX in the macro body.
%
% That gets used by \mbodybackslash (above).
%
% We need to get `macro parameter char #' into several definitions.
% The technique used is stolen from LaTeX: let \hash be something
% unexpandable, insert that wherever you need a #, and then redefine
% it to # just before using the token list produced.
%
% The same technique is used to protect \eatspaces till just before
% the macro is used.
%
% If there are 10 or more arguments, a different technique is used: the
% hook remains in the body, and when the macro is expanded the body is
% processed again to replace the arguments.
%
% In that case, the hook is \the\toks N-1, and we simply set \toks N-1 to
% the argument N value and then \edef the body (nothing else will expand
% because of the catcode regime under which the body was input).
%
% If you compile with TeX (not eTeX) and you have macros with 10 or more
% arguments, then no macro may have more than 256 arguments in total;
% otherwise an error is produced.
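% For example (illustrative only), a macro declared `@macro foo {a, b}'
% reaches \parsemargdef with the list `a, b'.  The result is \paramno=2,
% \paramlist expanding to `\hash1,\hash2,' (which becomes `#1,#2,' once
% \defmacro does \let\hash=##), and \macarg.a / \macarg.b holding the
% corresponding \xeatspaces{\hash N} hooks.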
\def\parsemargdef#1;{%
  \paramno=0\def\paramlist{}%
  \let\hash\relax
  \let\xeatspaces\relax
  \parsemargdefxxx#1,;,%
  % If there are 10 or more arguments, parse the argument list again to
  % set new definitions for the \macarg.BLAH macros corresponding to
  % each BLAH argument.  The list had to be parsed once anyway in order
  % to count the arguments, and since macros with at most 9 arguments
  % are by far more frequent than macros with 10 or more, defining the
  % \macarg.BLAH macros twice does not cost too much processing power.
  \ifnum\paramno<10\relax\else
    \paramno0\relax
    \parsemmanyargdef@@#1,;,% 10 or more arguments
  \fi
}
\def\parsemargdefxxx#1,{%
  \if#1;\let\next=\relax
  \else \let\next=\parsemargdefxxx
    \advance\paramno by 1
    \expandafter\edef\csname macarg.\eatspaces{#1}\endcsname
        {\xeatspaces{\hash\the\paramno}}%
    \edef\paramlist{\paramlist\hash\the\paramno,}%
  \fi\next}

\def\parsemmanyargdef@@#1,{%
  \if#1;\let\next=\relax
  \else 
    \let\next=\parsemmanyargdef@@
    \edef\tempb{\eatspaces{#1}}%
    \expandafter\def\expandafter\tempa
       \expandafter{\csname macarg.\tempb\endcsname}%
    % Note that we need some extra \noexpand\noexpand here: we don't
    % want \the to be expanded in \parsermacbody, as it uses an \xdef.
    \expandafter\edef\tempa
      {\noexpand\noexpand\noexpand\the\toks\the\paramno}%
    \advance\paramno by 1\relax
  \fi\next}

% These two commands read recursive and nonrecursive macro bodies.
% (They're different since rec and nonrec macros end differently.)
%

\catcode `\@\texiatcatcode
\long\def\parsemacbody#1@end macro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
\long\def\parsermacbody#1@end rmacro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
\catcode `\@=11\relax

\let\endargs@\relax
\let\nil@\relax
\def\nilm@{\nil@}%
\long\def\nillm@{\nil@}%

% This macro is expanded during the Texinfo macro expansion, not during its
% definition.  It gets all the argument values and assigns them to the
% macros \macarg.ARGNAME.
%
% #1 is the macro name
% #2 is the list of argument names
% #3 is the list of argument values
\def\getargvals@#1#2#3{%
  \def\macargdeflist@{}%
  \def\saveparamlist@{#2}% Need to keep a copy for parameter expansion.
  \def\paramlist{#2,\nil@}%
  \def\macroname{#1}%
  \begingroup
  \macroargctxt
  \def\argvaluelist{#3,\nil@}%
  \def\@tempa{#3}%
  \ifx\@tempa\empty
    \setemptyargvalues@
  \else
    \getargvals@@
  \fi
}

% 
\def\getargvals@@{%
  \ifx\paramlist\nilm@
      % Sanity check that \argvaluelist is also empty.
      \ifx\argvaluelist\nillm@
      \else
        \errhelp = \EMsimple
        \errmessage{Too many arguments in macro `\macroname'!}%
      \fi
      \let\next\macargexpandinbody@
  \else
    \ifx\argvaluelist\nillm@
       % No more argument values passed to the macro.  Set remaining
       % named-arg macros to empty.
       \let\next\setemptyargvalues@
    \else
      % pop current arg name into \@tempb
      \def\@tempa##1{\pop@{\@tempb}{\paramlist}##1\endargs@}%
      \expandafter\@tempa\expandafter{\paramlist}%
       % pop current argument value into \@tempc
      \def\@tempa##1{\longpop@{\@tempc}{\argvaluelist}##1\endargs@}%
      \expandafter\@tempa\expandafter{\argvaluelist}%
       % Here \@tempb is the current arg name and \@tempc is the current arg value.
       % First place the new argument macro definition into \@tempd
       \expandafter\macname\expandafter{\@tempc}%
       \expandafter\let\csname macarg.\@tempb\endcsname\relax
       \expandafter\def\expandafter\@tempe\expandafter{%
         \csname macarg.\@tempb\endcsname}%
       \edef\@tempd{\long\def\@tempe{\the\macname}}%
       \push@\@tempd\macargdeflist@
       \let\next\getargvals@@
    \fi
  \fi
  \next
}

\def\push@#1#2{%
  \expandafter\expandafter\expandafter\def
  \expandafter\expandafter\expandafter#2%
  \expandafter\expandafter\expandafter{%
  \expandafter#1#2}%
}
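% \push@ prepends the once-expanded #1 to the front of the list macro #2.
% For example, after \def\list{BC} and \def\new{A}, \push@\new\list
% leaves \list expanding to `ABC'.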

% Replace arguments by their values in the macro body, and place the result
% in macro \@tempa
\def\macvalstoargs@{%
  %  To do this we use the property that token registers that are \the'ed
  % within an \edef  expand only once. So we are going to place all argument
  % values into respective token registers.
  %
  % First we save the token context, and initialize argument numbering.
  \begingroup
    \paramno0\relax
    % Then, for each argument number #N, we place the corresponding argument
    % value into a new token list register \toks#N
    \expandafter\putargsintokens@\saveparamlist@,;,%
    % Then, we expand the body so that the arguments are replaced by their
    % values.  The trick that keeps the values themselves from being
    % expanded is that they are inside token registers, and token
    % registers expand only once in an \edef.
    \edef\@tempc{\csname mac.\macroname .body\endcsname}%
    % Now we restore the token stack pointer to free the token list
    % registers which we have used, but we make sure that the expanded
    % body is saved after the group ends.
    \expandafter
  \endgroup
  \expandafter\def\expandafter\@tempa\expandafter{\@tempc}%
  }

\def\macargexpandinbody@{%
  % Define the named-arg macros outside of this group, then close the group.
  \expandafter
  \endgroup
  \macargdeflist@
  % First replace the macro arguments in the body by their values; the
  % result is in \@tempa.
  \macvalstoargs@
  % Then we point \@tempb at the \norecurse or \gobble (for recursive)
  % macro value.
  \expandafter\let\expandafter\@tempb\csname mac.\macroname .recurse\endcsname
  % Depending on whether the macro is recursive or not, we need a
  % trailing \egroup.
  \ifx\@tempb\gobble
     \let\@tempc\relax
  \else
     \let\@tempc\egroup
  \fi
  % And now we do the real job:
  \edef\@tempd{\noexpand\@tempb{\macroname}\noexpand\scanmacro{\@tempa}\@tempc}%
  \@tempd
}

\def\putargsintokens@#1,{%
  \if#1;\let\next\relax
  \else
    \let\next\putargsintokens@
    % First we allocate the new token list register, and give it a temporary
    % alias \@tempb .
    \toksdef\@tempb\the\paramno
    % Then we place the argument value into that token list register.
    \expandafter\let\expandafter\@tempa\csname macarg.#1\endcsname
    \expandafter\@tempb\expandafter{\@tempa}%
    \advance\paramno by 1\relax
  \fi
  \next
}

% Save the token stack pointer into macro #1
\def\texisavetoksstackpoint#1{\edef#1{\the\@cclvi}}
% Restore the token stack pointer from number in macro #1
\def\texirestoretoksstackpoint#1{\expandafter\mathchardef\expandafter\@cclvi#1\relax}
% A \newtoks that can be used non-\outer.
\def\texinonouternewtoks{\alloc@ 5\toks \toksdef \@cclvi}

% Trailing missing arguments are set to empty
\def\setemptyargvalues@{%
  \ifx\paramlist\nilm@
    \let\next\macargexpandinbody@
  \else
    \expandafter\setemptyargvaluesparser@\paramlist\endargs@
    \let\next\setemptyargvalues@
  \fi
  \next
}

\def\setemptyargvaluesparser@#1,#2\endargs@{%
  \expandafter\def\expandafter\@tempa\expandafter{%
    \expandafter\def\csname macarg.#1\endcsname{}}%
  \push@\@tempa\macargdeflist@
  \def\paramlist{#2}%
}

% #1 is the element target macro
% #2 is the list macro
% #3,#4\endargs@ is the list value
\def\pop@#1#2#3,#4\endargs@{%
   \def#1{#3}%
   \def#2{#4}%
}
\long\def\longpop@#1#2#3,#4\endargs@{%
   \long\def#1{#3}%
   \long\def#2{#4}%
}
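% For example, \pop@\first\rest a,b,c\endargs@ defines \first as `a'
% and \rest as `b,c'.  \longpop@ does the same but allows \par tokens
% in the values.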

% This defines a Texinfo @macro. There are eight cases: recursive and
% nonrecursive macros of zero, one, up to nine, and many arguments.
% Much magic with \expandafter here.
% \xdef is used so that macro definitions will survive the file
% they're defined in; @include reads the file inside a group.
%
\def\defmacro{%
  \let\hash=##% convert placeholders to macro parameter chars
  \ifrecursive
    \ifcase\paramno
    % 0
      \expandafter\xdef\csname\the\macname\endcsname{%
        \noexpand\scanmacro{\temp}}%
    \or % 1
      \expandafter\xdef\csname\the\macname\endcsname{%
         \bgroup\noexpand\macroargctxt
         \noexpand\braceorline
         \expandafter\noexpand\csname\the\macname xxx\endcsname}%
      \expandafter\xdef\csname\the\macname xxx\endcsname##1{%
         \egroup\noexpand\scanmacro{\temp}}%
    \else
      \ifnum\paramno<10\relax % at most 9
        \expandafter\xdef\csname\the\macname\endcsname{%
           \bgroup\noexpand\macroargctxt
           \noexpand\csname\the\macname xx\endcsname}%
        \expandafter\xdef\csname\the\macname xx\endcsname##1{%
            \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}%
        \expandafter\expandafter
        \expandafter\xdef
        \expandafter\expandafter
          \csname\the\macname xxx\endcsname
            \paramlist{\egroup\noexpand\scanmacro{\temp}}%
      \else % 10 or more
        \expandafter\xdef\csname\the\macname\endcsname{%
          \noexpand\getargvals@{\the\macname}{\argl}%
        }%    
        \global\expandafter\let\csname mac.\the\macname .body\endcsname\temp
        \global\expandafter\let\csname mac.\the\macname .recurse\endcsname\gobble
      \fi
    \fi
  \else
    \ifcase\paramno
    % 0
      \expandafter\xdef\csname\the\macname\endcsname{%
        \noexpand\norecurse{\the\macname}%
        \noexpand\scanmacro{\temp}\egroup}%
    \or % 1
      \expandafter\xdef\csname\the\macname\endcsname{%
         \bgroup\noexpand\macroargctxt
         \noexpand\braceorline
         \expandafter\noexpand\csname\the\macname xxx\endcsname}%
      \expandafter\xdef\csname\the\macname xxx\endcsname##1{%
        \egroup
        \noexpand\norecurse{\the\macname}%
        \noexpand\scanmacro{\temp}\egroup}%
    \else % at most 9
      \ifnum\paramno<10\relax
        \expandafter\xdef\csname\the\macname\endcsname{%
           \bgroup\noexpand\macroargctxt
           \expandafter\noexpand\csname\the\macname xx\endcsname}%
        \expandafter\xdef\csname\the\macname xx\endcsname##1{%
            \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}%
        \expandafter\expandafter
        \expandafter\xdef
        \expandafter\expandafter
        \csname\the\macname xxx\endcsname
        \paramlist{%
            \egroup
            \noexpand\norecurse{\the\macname}%
            \noexpand\scanmacro{\temp}\egroup}%
      \else % 10 or more:
        \expandafter\xdef\csname\the\macname\endcsname{%
          \noexpand\getargvals@{\the\macname}{\argl}%
        }%
        \global\expandafter\let\csname mac.\the\macname .body\endcsname\temp
        \global\expandafter\let\csname mac.\the\macname .recurse\endcsname\norecurse
      \fi
    \fi
  \fi}

\catcode `\@\texiatcatcode\relax

\def\norecurse#1{\bgroup\cslet{#1}{macsave.#1}}

% \braceorline decides whether the next nonwhitespace character is a
% {.  If so, it reads up to the closing }; if not, it reads the whole
% line.  Whatever was read is then fed to the next control sequence
% as an argument (by \parsebrace or \parsearg).
% 
\def\braceorline#1{\let\macnamexxx=#1\futurelet\nchar\braceorlinexxx}
\def\braceorlinexxx{%
  \ifx\nchar\bgroup\else
    \expandafter\parsearg
  \fi \macnamexxx}


% @alias.
% We need some trickery to remove the optional spaces around the equal
% sign.  Make them active and then expand them all to nothing.
%
\def\alias{\parseargusing\obeyspaces\aliasxxx}
\def\aliasxxx #1{\aliasyyy#1\relax}
\def\aliasyyy #1=#2\relax{%
  {%
    \expandafter\let\obeyedspace=\empty
    \addtomacrolist{#1}%
    \xdef\next{\global\let\makecsname{#1}=\makecsname{#2}}%
  }%
  \next
}
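% For example, `@alias mycode = code' makes @mycode behave exactly
% like @code (the alias name here is just illustrative).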


\message{cross references,}

\newwrite\auxfile
\newif\ifhavexrefs    % True if xref values are known.
\newif\ifwarnedxrefs  % True if we warned once that they aren't known.

% @inforef is relatively simple.
\def\inforef #1{\inforefzzz #1,,,,**}
\def\inforefzzz #1,#2,#3,#4**{%
  \putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}},
  node \samp{\ignorespaces#1{}}}

% @node's only job in TeX is to define \lastnode, which is used in
% cross-references.  The @node line might or might not have commas, and
% might or might not have spaces before the first comma, like:
% @node foo , bar , ...
% We don't want such trailing spaces in the node name.
%
\parseargdef\node{\checkenv{}\donode #1 ,\finishnodeparse}
%
% also remove a trailing comma, in case of something like this:
% @node Help-Cross,  ,  , Cross-refs
\def\donode#1 ,#2\finishnodeparse{\dodonode #1,\finishnodeparse}
\def\dodonode#1,#2\finishnodeparse{\gdef\lastnode{#1}}

\let\nwnode=\node
\let\lastnode=\empty

% Write a cross-reference definition for the current node.  #1 is the
% type (Ynumbered, Yappendix, Ynothing).
%
\def\donoderef#1{%
  \ifx\lastnode\empty\else
    \setref{\lastnode}{#1}%
    \global\let\lastnode=\empty
  \fi
}

% @anchor{NAME} -- define xref target at arbitrary point.
%
\newcount\savesfregister
%
\def\savesf{\relax \ifhmode \savesfregister=\spacefactor \fi}
\def\restoresf{\relax \ifhmode \spacefactor=\savesfregister \fi}
\def\anchor#1{\savesf \setref{#1}{Ynothing}\restoresf \ignorespaces}

% \setref{NAME}{SNT} defines a cross-reference point NAME (a node or an
% anchor), which consists of three parts:
% 1) NAME-title - the current sectioning name taken from \lastsection,
%                 or the anchor name.
% 2) NAME-snt   - section number and type, passed as the SNT arg, or
%                 empty for anchors.
% 3) NAME-pg    - the page number.
%
% This is called from \donoderef, \anchor, and \dofloat.  In the case of
% floats, there is an additional part, which is not written here:
% 4) NAME-lof   - the text as it should appear in a @listoffloats.
%
\def\setref#1#2{%
  \pdfmkdest{#1}%
  \iflinks
    {%
      \atdummies  % preserve commands, but don't expand them
      \edef\writexrdef##1##2{%
	\write\auxfile{@xrdef{#1-% #1 of \setref, expanded by the \edef
	  ##1}{##2}}% these are parameters of \writexrdef
      }%
      \toks0 = \expandafter{\lastsection}%
      \immediate \writexrdef{title}{\the\toks0 }%
      \immediate \writexrdef{snt}{\csname #2\endcsname}% \Ynumbered etc.
      \safewhatsit{\writexrdef{pg}{\folio}}% will be written later, at \shipout
    }%
  \fi
}
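% For example, a node named `Overview' produces three aux-file entries,
% roughly (the actual values depend on the document):
%   @xrdef{Overview-title}{...}  the section title
%   @xrdef{Overview-snt}{...}    e.g. `Chapter 1'
%   @xrdef{Overview-pg}{...}     the page number, written at \shipout time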

% @xrefautomaticsectiontitle on|off says whether @section(ing) names are
% used automatically in xrefs, if the third arg is not explicitly
% specified.  This was provided as a "secret" @set
% xref-automatic-section-title variable; now it's official.
% 
\parseargdef\xrefautomaticsectiontitle{%
  \def\temp{#1}%
  \ifx\temp\onword
    \expandafter\let\csname SETxref-automatic-section-title\endcsname
      = \empty
  \else\ifx\temp\offword
    \expandafter\let\csname SETxref-automatic-section-title\endcsname
      = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @xrefautomaticsectiontitle value `\temp',
                must be on|off}%
  \fi\fi
}

% 
% @xref, @pxref, and @ref generate cross-references.  For \xrefX, #1 is
% the node name, #2 the name of the Info cross-reference, #3 the printed
% node name, #4 the name of the Info file, #5 the name of the printed
% manual.  All but the node name can be omitted.
%
\def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]}
\def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]}
\def\ref#1{\xrefX[#1,,,,,,,]}
%
\newbox\toprefbox
\newbox\printedrefnamebox
\newbox\infofilenamebox
\newbox\printedmanualbox
%
\def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup
  \unsepspaces
  %
  % Get args without leading/trailing spaces.
  \def\printedrefname{\ignorespaces #3}%
  \setbox\printedrefnamebox = \hbox{\printedrefname\unskip}%
  %
  \def\infofilename{\ignorespaces #4}%
  \setbox\infofilenamebox = \hbox{\infofilename\unskip}%
  %
  \def\printedmanual{\ignorespaces #5}%
  \setbox\printedmanualbox  = \hbox{\printedmanual\unskip}%
  %
  % If the printed reference name (arg #3) was not explicitly given in
  % the @xref, figure out what we want to use.
  \ifdim \wd\printedrefnamebox = 0pt
    % No printed node name was explicitly given.
    \expandafter\ifx\csname SETxref-automatic-section-title\endcsname \relax
      % Not auto section-title: use node name inside the square brackets.
      \def\printedrefname{\ignorespaces #1}%
    \else
      % Auto section-title: use chapter/section title inside
      % the square brackets if we have it.
      \ifdim \wd\printedmanualbox > 0pt
        % It is in another manual, so we don't have it; use node name.
        \def\printedrefname{\ignorespaces #1}%
      \else
        \ifhavexrefs
          % We (should) know the real title if we have the xref values.
          \def\printedrefname{\refx{#1-title}{}}%
        \else
          % Otherwise just copy the Info node name.
          \def\printedrefname{\ignorespaces #1}%
        \fi%
      \fi
    \fi
  \fi
  %
  % Make link in pdf output.
  \ifpdf
    {\indexnofonts
     \turnoffactive
     \makevalueexpandable
     % This expands tokens, so do it after making catcode changes, so _
     % etc. don't get their TeX definitions.  This ignores all spaces in
     % #4, including (wrongly) those in the middle of the filename.
     \getfilename{#4}%
     %
     % This (wrongly) does not take account of leading or trailing
     % spaces in #1, which should be ignored.
     \edef\pdfxrefdest{#1}%
     \ifx\pdfxrefdest\empty
       \def\pdfxrefdest{Top}% no empty targets
     \else
       \txiescapepdf\pdfxrefdest  % escape PDF special chars
     \fi
     %
     \leavevmode
     \startlink attr{/Border [0 0 0]}%
     \ifnum\filenamelength>0
       goto file{\the\filename.pdf} name{\pdfxrefdest}%
     \else
       goto name{\pdfmkpgn{\pdfxrefdest}}%
     \fi
    }%
    \setcolor{\linkcolor}%
  \fi
  %
  % Float references are printed completely differently: "Figure 1.2"
  % instead of "[somenode], p.3".  We distinguish them by the
  % LABEL-title being set to a magic string.
  {%
    % Have to otherify everything special to allow the \csname to
    % include an _ in the xref name, etc.
    \indexnofonts
    \turnoffactive
    \expandafter\global\expandafter\let\expandafter\Xthisreftitle
      \csname XR#1-title\endcsname
  }%
  \iffloat\Xthisreftitle
    % If the user specified the print name (third arg) to the ref,
    % print it instead of our usual "Figure 1.2".
    \ifdim\wd\printedrefnamebox = 0pt
      \refx{#1-snt}{}%
    \else
      \printedrefname
    \fi
    %
    % If the user also gave the printed manual name (fifth arg), append
    % "in MANUALNAME".
    \ifdim \wd\printedmanualbox > 0pt
      \space \putwordin{} \cite{\printedmanual}%
    \fi
  \else
    % node/anchor (non-float) references.
    % 
    % If we use \unhbox to print the node names, TeX does not insert
    % empty discretionaries after hyphens, which means that it will not
    % find a line break at a hyphen in a node name.  Since some manuals
    % are best written with fairly long node names, containing hyphens,
    % this is a loss.  Therefore, we give the text of the node name
    % again, so it is as if TeX is seeing it for the first time.
    % 
    \ifdim \wd\printedmanualbox > 0pt
      % Cross-manual reference with a printed manual name.
      % 
      \crossmanualxref{\cite{\printedmanual\unskip}}%
    %
    \else\ifdim \wd\infofilenamebox > 0pt
      % Cross-manual reference with only an info filename (arg 4), no
      % printed manual name (arg 5).  This is essentially the same as
      % the case above; we output the filename, since we have nothing else.
      % 
      \crossmanualxref{\code{\infofilename\unskip}}%
    %
    \else
      % Reference within this manual.
      %
      % _ (for example) has to be the character _ for the purposes of the
      % control sequence corresponding to the node, but it has to expand
      % into the usual \leavevmode...\vrule stuff for purposes of
      % printing. So we \turnoffactive for the \refx-snt, back on for the
      % printing, back off for the \refx-pg.
      {\turnoffactive
       % Only output a following space if the -snt ref is nonempty; for
       % @unnumbered and @anchor, it won't be.
       \setbox2 = \hbox{\ignorespaces \refx{#1-snt}{}}%
       \ifdim \wd2 > 0pt \refx{#1-snt}\space\fi
      }%
      % output the `[mynode]' via the macro below so it can be overridden.
      \xrefprintnodename\printedrefname
      %
      % But we always want a comma and a space:
      ,\space
      %
      % output the `page 3'.
      \turnoffactive \putwordpage\tie\refx{#1-pg}{}%
    \fi\fi
  \fi
  \endlink
\endgroup}

% Output a cross-manual xref to #1.  Used just above (twice).
% 
% Only include the text "Section ``foo'' in" if the foo is neither
% missing nor Top.  Thus, @xref{,,,foo,The Foo Manual} outputs simply
% "see The Foo Manual", the idea being to refer to the whole manual.
% 
% But, this being TeX, we can't easily compare our node name against the
% string "Top" while ignoring the possible spaces before and after in
% the input.  By adding the arbitrary 7sp below, we make it much less
% likely that a real node name would have the same width as "Top" (e.g.,
% in a monospaced font).  Hopefully it will never happen in practice.
% 
% For the same basic reason, we retypeset the "Top" at every
% reference, since the current font is indeterminate.
% 
\def\crossmanualxref#1{%
  \setbox\toprefbox = \hbox{Top\kern7sp}%
  \setbox2 = \hbox{\ignorespaces \printedrefname \unskip \kern7sp}%
  \ifdim \wd2 > 7sp  % nonempty?
    \ifdim \wd2 = \wd\toprefbox \else  % same as Top?
      \putwordSection{} ``\printedrefname'' \putwordin{}\space
    \fi
  \fi
  #1%
}

% This macro is called from \xrefX for the `[nodename]' part of xref
% output.  It's a separate macro only so it can be changed more easily,
% since square brackets don't work well in some documents.  Particularly
% one that Bob is working on :).
%
\def\xrefprintnodename#1{[#1]}
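% For example, a document could switch to quotes instead of square
% brackets by redefining it inside @tex ... @end tex:
%   \global\def\xrefprintnodename#1{`#1'}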

% Things referred to by \setref.
%
\def\Ynothing{}
\def\Yomitfromtoc{}
\def\Ynumbered{%
  \ifnum\secno=0
    \putwordChapter@tie \the\chapno
  \else \ifnum\subsecno=0
    \putwordSection@tie \the\chapno.\the\secno
  \else \ifnum\subsubsecno=0
    \putwordSection@tie \the\chapno.\the\secno.\the\subsecno
  \else
    \putwordSection@tie \the\chapno.\the\secno.\the\subsecno.\the\subsubsecno
  \fi\fi\fi
}
\def\Yappendix{%
  \ifnum\secno=0
     \putwordAppendix@tie @char\the\appendixno{}%
  \else \ifnum\subsecno=0
     \putwordSection@tie @char\the\appendixno.\the\secno
  \else \ifnum\subsubsecno=0
    \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno
  \else
    \putwordSection@tie
      @char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno
  \fi\fi\fi
}

% Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME.
% If its value is nonempty, SUFFIX is output afterward.
%
\def\refx#1#2{%
  {%
    \indexnofonts
    \otherbackslash
    \expandafter\global\expandafter\let\expandafter\thisrefX
      \csname XR#1\endcsname
  }%
  \ifx\thisrefX\relax
    % If not defined, say something at least.
    \angleleft un\-de\-fined\angleright
    \iflinks
      \ifhavexrefs
        {\toks0 = {#1}% avoid expansion of possibly-complex value
         \message{\linenumber Undefined cross reference `\the\toks0'.}}%
      \else
        \ifwarnedxrefs\else
          \global\warnedxrefstrue
          \message{Cross reference values unknown; you must run TeX again.}%
        \fi
      \fi
    \fi
  \else
    % It's defined, so just use it.
    \thisrefX
  \fi
  #2% Output the suffix in any case.
}

% This is the macro invoked by entries in the aux file.  Usually it's
% just a \def (we prepend XR to the control sequence name to avoid
% collisions).  But if this is a float type, we have more work to do.
%
\def\xrdef#1#2{%
  {% The node name might contain 8-bit characters, which in our current
   % implementation are changed to commands like @'e.  Don't let these
   % mess up the control sequence name.
    \indexnofonts
    \turnoffactive
    \xdef\safexrefname{#1}%
  }%
  %
  \expandafter\gdef\csname XR\safexrefname\endcsname{#2}% remember this xref
  %
  % Was that xref control sequence that we just defined for a float?
  \expandafter\iffloat\csname XR\safexrefname\endcsname
    % it was a float, and we have the (safe) float type in \iffloattype.
    \expandafter\let\expandafter\floatlist
      \csname floatlist\iffloattype\endcsname
    %
    % Is this the first time we've seen this float type?
    \expandafter\ifx\floatlist\relax
      \toks0 = {\do}% yes, so just \do
    \else
      % had it before, so preserve previous elements in list.
      \toks0 = \expandafter{\floatlist\do}%
    \fi
    %
    % Remember this xref in the control sequence \floatlistFLOATTYPE,
    % for later use in \listoffloats.
    \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0
      {\safexrefname}}%
  \fi
}

% Read the last existing aux file, if any.  No error if none exists.
%
\def\tryauxfile{%
  \openin 1 \jobname.aux
  \ifeof 1 \else
    \readdatafile{aux}%
    \global\havexrefstrue
  \fi
  \closein 1
}

\def\setupdatafile{%
  \catcode`\^^@=\other
  \catcode`\^^A=\other
  \catcode`\^^B=\other
  \catcode`\^^C=\other
  \catcode`\^^D=\other
  \catcode`\^^E=\other
  \catcode`\^^F=\other
  \catcode`\^^G=\other
  \catcode`\^^H=\other
  \catcode`\^^K=\other
  \catcode`\^^L=\other
  \catcode`\^^N=\other
  \catcode`\^^P=\other
  \catcode`\^^Q=\other
  \catcode`\^^R=\other
  \catcode`\^^S=\other
  \catcode`\^^T=\other
  \catcode`\^^U=\other
  \catcode`\^^V=\other
  \catcode`\^^W=\other
  \catcode`\^^X=\other
  \catcode`\^^Z=\other
  \catcode`\^^[=\other
  \catcode`\^^\=\other
  \catcode`\^^]=\other
  \catcode`\^^^=\other
  \catcode`\^^_=\other
  % It was suggested to set the catcode of ^ to 7, which would allow ^^e4 etc.
  % in xref tags, i.e., node names.  But since ^^e4 notation isn't
  % supported in the main text, it doesn't seem desirable.  Furthermore,
  % that is not enough: for node names that actually contain a ^
  % character, we would end up writing a line like this: 'xrdef {'hat
  % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first
  % argument, and \hat is not an expandable control sequence.  It could
  % all be worked out, but why?  Either we support ^^ or we don't.
  %
  % The other change necessary for this was to define \auxhat:
  % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter
  % and then to call \auxhat in \setq.
  %
  \catcode`\^=\other
  %
  % Special characters.  Should be turned off anyway, but...
  \catcode`\~=\other
  \catcode`\[=\other
  \catcode`\]=\other
  \catcode`\"=\other
  \catcode`\_=\other
  \catcode`\|=\other
  \catcode`\<=\other
  \catcode`\>=\other
  \catcode`\$=\other
  \catcode`\#=\other
  \catcode`\&=\other
  \catcode`\%=\other
  \catcode`+=\other % avoid \+ for paranoia even though we've turned it off
  %
  % This is to support \ in node names and titles, since the \
  % characters end up in a \csname.  It's easier than
  % leaving it active and making its active definition an actual \
  % character.  What I don't understand is why it works in the *value*
  % of the xrdef.  Seems like it should be a catcode12 \, and that
  % should not typeset properly.  But it works, so I'm moving on for
  % now.  --karl, 15jan04.
  \catcode`\\=\other
  %
  % Make the characters 128-255 be printing characters.
  \count1=128
  \def\loop{%
    \catcode\count1=\other
    \advance\count1 by 1
    \ifnum \count1<256 \loop \fi
  }%
  \loop
  %
  % @ is our escape character in .aux files, and we need braces.
  \catcode`\{=1
  \catcode`\}=2
  \catcode`\@=0
}

\def\readdatafile#1{%
\begingroup
  \setupdatafile
  \input\jobname.#1
\endgroup}


\message{insertions,}
% including footnotes.

\newcount \footnoteno

% The trailing space in the following definition for supereject is
% vital for proper filling; pages come out unaligned when you do a
% pagealignmacro call if that space before the closing brace is
% removed. (Generally, numeric constants should always be followed by a
% space to prevent strange expansion errors.)
\def\supereject{\par\penalty -20000\footnoteno =0 }

% @footnotestyle is meaningful for Info output only.
\let\footnotestyle=\comment

{\catcode `\@=11
%
% Auto-number footnotes.  Otherwise like plain.
\gdef\footnote{%
  \let\indent=\ptexindent
  \let\noindent=\ptexnoindent
  \global\advance\footnoteno by \@ne
  \edef\thisfootno{$^{\the\footnoteno}$}%
  %
  % In case the footnote comes at the end of a sentence, preserve the
  % extra spacing after we do the footnote number.
  \let\@sf\empty
  \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\ptexslash\fi
  %
  % Remove inadvertent blank space before typesetting the footnote number.
  \unskip
  \thisfootno\@sf
  \dofootnote
}%

% Don't bother with the trickery in plain.tex to not require the
% footnote text as a parameter.  Our footnotes don't need to be so general.
%
% Oh yes, they do; otherwise, @ifset (and anything else that uses
% \parseargline) fails inside footnotes because the tokens are fixed when
% the footnote is read.  --karl, 16nov96.
%
\gdef\dofootnote{%
  \insert\footins\bgroup
  % We want to typeset this text as a normal paragraph, even if the
  % footnote reference occurs in (for example) a display environment.
  % So reset some parameters.
  \hsize=\pagewidth
  \interlinepenalty\interfootnotelinepenalty
  \splittopskip\ht\strutbox % top baseline for broken footnotes
  \splitmaxdepth\dp\strutbox
  \floatingpenalty\@MM
  \leftskip\z@skip
  \rightskip\z@skip
  \spaceskip\z@skip
  \xspaceskip\z@skip
  \parindent\defaultparindent
  %
  \smallfonts \rm
  %
  % Because we use hanging indentation in footnotes, a @noindent appears
  % to exdent this text, so make it be a no-op.  makeinfo does not use
  % hanging indentation so @noindent can still be needed within footnote
  % text after an @example or the like (not that this is good style).
  \let\noindent = \relax
  %
  % Hang the footnote text off the number.  Use \everypar in case the
  % footnote extends for more than one paragraph.
  \everypar = {\hang}%
  \textindent{\thisfootno}%
  %
  % Don't crash into the line above the footnote text.  Since this
  % expands into a box, it must come within the paragraph, lest it
  % provide a place where TeX can split the footnote.
  \footstrut
  %
  % Invoke rest of plain TeX footnote routine.
  \futurelet\next\fo@t
}
}%end \catcode `\@=11

% In case a @footnote appears in a vbox, save the footnote text and create
% the real \insert just after the vbox is finished.  Otherwise, the insertion
% would be lost.
% Similarly, if a @footnote appears inside an alignment, save the footnote
% text to a box and make the \insert when a row of the table is finished.
% And the same can be done for other insert classes.  --kasal, 16nov03.

% Replace the \insert primitive by a cheating macro.
% Deeper inside, just make sure that the saved insertions are not spilled
% out prematurely.
%
\def\startsavinginserts{%
  \ifx \insert\ptexinsert
    \let\insert\saveinsert
  \else
    \let\checkinserts\relax
  \fi
}

% This \insert replacement works for both \insert\footins{foo} and
% \insert\footins\bgroup foo\egroup, but it doesn't work for \insert27{foo}.
%
\def\saveinsert#1{%
  \edef\next{\noexpand\savetobox \makeSAVEname#1}%
  \afterassignment\next
  % swallow the left brace
  \let\temp =
}
\def\makeSAVEname#1{\makecsname{SAVE\expandafter\gobble\string#1}}
\def\savetobox#1{\global\setbox#1 = \vbox\bgroup \unvbox#1}

\def\checksaveins#1{\ifvoid#1\else \placesaveins#1\fi}

\def\placesaveins#1{%
  \ptexinsert \csname\expandafter\gobblesave\string#1\endcsname
    {\box#1}%
}

% eat @SAVE -- beware, all of them have catcode \other:
{
  \def\dospecials{\do S\do A\do V\do E} \uncatcodespecials  %  ;-)
  \gdef\gobblesave @SAVE{}
}

% initialization:
\def\newsaveins #1{%
  \edef\next{\noexpand\newsaveinsX \makeSAVEname#1}%
  \next
}
\def\newsaveinsX #1{%
  \csname newbox\endcsname #1%
  \expandafter\def\expandafter\checkinserts\expandafter{\checkinserts
    \checksaveins #1}%
}

% initialize:
\let\checkinserts\empty
\newsaveins\footins
\newsaveins\margin


% @image.  We use the macros from epsf.tex to support this.
% If epsf.tex is not installed and @image is used, we complain.
%
% Check for and read epsf.tex up front.  If we read it only at @image
% time, we might be inside a group, and then its definitions would get
% undone and the next image would fail.
\openin 1 = epsf.tex
\ifeof 1 \else
  % Do not bother showing banner with epsf.tex v2.7k (available in
  % doc/epsf.tex and on ctan).
  \def\epsfannounce{\toks0 = }%
  \input epsf.tex
\fi
\closein 1
%
% We will only complain once about lack of epsf.tex.
\newif\ifwarnednoepsf
\newhelp\noepsfhelp{epsf.tex must be installed for images to
  work.  It is also included in the Texinfo distribution, or you can get
  it from ftp://tug.org/tex/epsf.tex.}
%
\def\image#1{%
  \ifx\epsfbox\thisisundefined
    \ifwarnednoepsf \else
      \errhelp = \noepsfhelp
      \errmessage{epsf.tex not found, images will be ignored}%
      \global\warnednoepsftrue
    \fi
  \else
    \imagexxx #1,,,,,\finish
  \fi
}
%
% Arguments to @image:
% #1 is (mandatory) image filename; we tack on .eps extension.
% #2 is (optional) width, #3 is (optional) height.
% #4 is (ignored optional) html alt text.
% #5 is (ignored optional) extension.
% #6 is just the usual extra ignored arg for parsing stuff.
\newif\ifimagevmode
\def\imagexxx#1,#2,#3,#4,#5,#6\finish{\begingroup
  \catcode`\^^M = 5     % in case we're inside an example
  \normalturnoffactive  % allow _ et al. in names
  % If the image is by itself, center it.
  \ifvmode
    \imagevmodetrue
  \else \ifx\centersub\centerV
    % for @center @image, we need a vbox so we can have our vertical space
    \imagevmodetrue
    \vbox\bgroup % vbox has better behavior than vtop here
  \fi\fi
  %
  \ifimagevmode
    \nobreak\medskip
    % Usually we'll have text after the image which will insert
    % \parskip glue, so insert it here too to equalize the space
    % above and below.
    \nobreak\vskip\parskip
    \nobreak
  \fi
  %
  % Leave vertical mode so that indentation from an enclosing
  %  environment such as @quotation is respected.
  % However, if we're at the top level, we don't want the
  %  normal paragraph indentation.
  % On the other hand, if we are in the case of @center @image, we don't
  %  want to start a paragraph, which would create an hsize-width box and
  %  eradicate the centering.
  \ifx\centersub\centerV\else \noindent \fi
  %
  % Output the image.
  \ifpdf
    \dopdfimage{#1}{#2}{#3}%
  \else
    % \epsfbox itself resets \epsf?size at each figure.
    \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi
    \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi
    \epsfbox{#1.eps}%
  \fi
  %
  \ifimagevmode
    \medskip  % space after a standalone image
  \fi
  \ifx\centersub\centerV \egroup \fi
\endgroup}


% @float FLOATTYPE,LABEL,LOC ... @end float for displayed figures, tables,
% etc.  We don't actually implement floating yet, we always include the
% float "here".  But it seemed the best name for the future.
%
\envparseargdef\float{\eatcommaspace\eatcommaspace\dofloat#1, , ,\finish}

% There may be a space before second and/or third parameter; delete it.
\def\eatcommaspace#1, {#1,}
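% For example, "@float Figure, lbl, here" gives the call
%   \eatcommaspace\eatcommaspace\dofloat Figure, lbl, here, , ,\finish
% and the two \eatcommaspace's delete the spaces after the first two
% commas, so \dofloat receives "Figure,lbl,here, , ,\finish".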

% #1 is the optional FLOATTYPE, the text label for this float, typically
% "Figure", "Table", "Example", etc.  Can't contain commas.  If omitted,
% this float will not be numbered and cannot be referred to.
%
% #2 is the optional xref label.  Also must be present for the float to
% be referable.
%
% #3 is the optional positioning argument; for now, it is ignored.  It
% will somehow specify the positions allowed to float to (here, top, bottom).
%
% We keep a separate counter for each FLOATTYPE, which we reset at each
% chapter-level command.
\let\resetallfloatnos=\empty
%
\def\dofloat#1,#2,#3,#4\finish{%
  \let\thiscaption=\empty
  \let\thisshortcaption=\empty
  %
  % don't lose footnotes inside @float.
  %
  % BEWARE: when the floats start floating, we have to issue a warning whenever
  % an insert appears inside a float which could possibly float. --kasal, 26may04
  %
  \startsavinginserts
  %
  % @float can't be used inside a paragraph.
  \par
  %
  \vtop\bgroup
    \def\floattype{#1}%
    \def\floatlabel{#2}%
    \def\floatloc{#3}% we do nothing with this yet.
    %
    \ifx\floattype\empty
      \let\safefloattype=\empty
    \else
      {%
        % the floattype might have accents or other special characters,
        % but we need to use it in a control sequence name.
        \indexnofonts
        \turnoffactive
        \xdef\safefloattype{\floattype}%
      }%
    \fi
    %
    % If label is given but no type, we handle that as the empty type.
    \ifx\floatlabel\empty \else
      % We want each FLOATTYPE to be numbered separately (Figure 1,
      % Table 1, Figure 2, ...).  (And if no label, no number.)
      %
      \expandafter\getfloatno\csname\safefloattype floatno\endcsname
      \global\advance\floatno by 1
      %
      {%
        % This magic value for \lastsection is output by \setref as the
        % XREFLABEL-title value.  \xrefX uses it to distinguish float
        % labels (which have a completely different output format) from
        % node and anchor labels.  And \xrdef uses it to construct the
        % lists of floats.
        %
        \edef\lastsection{\floatmagic=\safefloattype}%
        \setref{\floatlabel}{Yfloat}%
      }%
    \fi
    %
    % start with \parskip glue, I guess.
    \vskip\parskip
    %
    % Don't suppress indentation if a float happens to start a section.
    \restorefirstparagraphindent
}

% we have these possibilities:
% @float Foo,lbl & @caption{Cap}: Foo 1.1: Cap
% @float Foo,lbl & no caption:    Foo 1.1
% @float Foo & @caption{Cap}:     Foo: Cap
% @float Foo & no caption:        Foo
% @float ,lbl & @caption{Cap}:    1.1: Cap
% @float ,lbl & no caption:       1.1
% @float & @caption{Cap}:         Cap
% @float & no caption:
%
\def\Efloat{%
    \let\floatident = \empty
    %
    % In all cases, if we have a float type, it comes first.
    \ifx\floattype\empty \else \def\floatident{\floattype}\fi
    %
    % If we have an xref label, the number comes next.
    \ifx\floatlabel\empty \else
      \ifx\floattype\empty \else % if also had float type, need tie first.
        \appendtomacro\floatident{\tie}%
      \fi
      % the number.
      \appendtomacro\floatident{\chaplevelprefix\the\floatno}%
    \fi
    %
    % Start the printed caption with what we've constructed in
    % \floatident, but keep it separate; we need \floatident again.
    \let\captionline = \floatident
    %
    \ifx\thiscaption\empty \else
      \ifx\floatident\empty \else
	\appendtomacro\captionline{: }% had ident, so need a colon between
      \fi
      %
      % caption text.
      \appendtomacro\captionline{\scanexp\thiscaption}%
    \fi
    %
    % If we have anything to print, print it, with space before.
    % Eventually this needs to become an \insert.
    \ifx\captionline\empty \else
      \vskip.5\parskip
      \captionline
      %
      % Space below caption.
      \vskip\parskip
    \fi
    %
    % If we have an xref label, write the list of floats info.  Do this
    % after the caption, to avoid chance of it being a breakpoint.
    \ifx\floatlabel\empty \else
      % Write the text that goes in the lof to the aux file as
      % \floatlabel-lof.  Besides \floatident, we include the short
      % caption if specified, else the full caption if specified, else nothing.
      {%
        \atdummies
        %
        % since we read the caption text in the macro world, where ^^M
        % is turned into a normal character, we have to scan it back, so
        % we don't write the literal three characters "^^M" into the aux file.
	\scanexp{%
	  \xdef\noexpand\gtemp{%
	    \ifx\thisshortcaption\empty
	      \thiscaption
	    \else
	      \thisshortcaption
	    \fi
	  }%
	}%
        \immediate\write\auxfile{@xrdef{\floatlabel-lof}{\floatident
	  \ifx\gtemp\empty \else : \gtemp \fi}}%
      }%
    \fi
  \egroup  % end of \vtop
  %
  % place the captured inserts
  %
  % BEWARE: when the floats start floating, we have to issue warning
  % whenever an insert appears inside a float which could possibly
  % float. --kasal, 26may04
  %
  \checkinserts
}

% Append the tokens #2 to the definition of macro #1, not expanding either.
%
\def\appendtomacro#1#2{%
  \expandafter\def\expandafter#1\expandafter{#1#2}%
}
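% For example, after \def\foo{ab}, \appendtomacro\foo{cd} redefines \foo
% to expand to "abcd"; neither \foo's old body nor "cd" is expanded in
% the process.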

% @caption, @shortcaption
%
\def\caption{\docaption\thiscaption}
\def\shortcaption{\docaption\thisshortcaption}
\def\docaption{\checkenv\float \bgroup\scanargctxt\defcaption}
\def\defcaption#1#2{\egroup \def#1{#2}}

% The parameter is the control sequence identifying the counter we are
% going to use.  Create it if it doesn't exist and assign it to \floatno.
\def\getfloatno#1{%
  \ifx#1\relax
      % Haven't seen this figure type before.
      \csname newcount\endcsname #1%
      %
      % Remember to reset this floatno at the next chap.
      \expandafter\gdef\expandafter\resetallfloatnos
        \expandafter{\resetallfloatnos #1=0 }%
  \fi
  \let\floatno#1%
}
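% For example, the first "@float Figure,lbl" reaches
%   \expandafter\getfloatno\csname Figurefloatno\endcsname
% with \Figurefloatno still undefined (hence \relax), so a new count is
% allocated, "\Figurefloatno=0 " is appended to \resetallfloatnos, and
% \floatno is \let equal to it.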

% \setref calls this to get the XREFLABEL-snt value.  We want an @xref
% to the FLOATLABEL to expand to "Figure 3.1".  We call \setref when we
% first read the @float command.
%
\def\Yfloat{\floattype@tie \chaplevelprefix\the\floatno}%

% Magic string used for the XREFLABEL-title value, so \xrefX can
% distinguish floats from other xref types.
\def\floatmagic{!!float!!}

% #1 is the control sequence we are passed; we expand into a conditional
% which is true if #1 represents a float ref.  That is, the magic
% \lastsection value which we \setref above.
%
\def\iffloat#1{\expandafter\doiffloat#1==\finish}
%
% #1 is (maybe) the \floatmagic string.  If so, #2 will be the
% (safe) float type for this float.  We set \iffloattype to #2.
%
\def\doiffloat#1=#2=#3\finish{%
  \def\temp{#1}%
  \def\iffloattype{#2}%
  \ifx\temp\floatmagic
}
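% For example, with \lastsection set by \setref to "!!float!!=figure",
% \iffloat\lastsection expands to "\doiffloat !!float!!=figure==\finish",
% so #1 matches \floatmagic (the \ifx is true) and \iffloattype becomes
% "figure".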

% @listoffloats FLOATTYPE - print a list of floats like a table of contents.
%
\parseargdef\listoffloats{%
  \def\floattype{#1}% floattype
  {%
    % the floattype might have accents or other special characters,
    % but we need to use it in a control sequence name.
    \indexnofonts
    \turnoffactive
    \xdef\safefloattype{\floattype}%
  }%
  %
  % \xrdef saves the floats as a \do-list in \floatlistSAFEFLOATTYPE.
  \expandafter\ifx\csname floatlist\safefloattype\endcsname \relax
    \ifhavexrefs
      % if the user said @listoffloats foo but never @float foo.
      \message{\linenumber No `\safefloattype' floats to list.}%
    \fi
  \else
    \begingroup
      \leftskip=\tocindent  % indent these entries like a toc
      \let\do=\listoffloatsdo
      \csname floatlist\safefloattype\endcsname
    \endgroup
  \fi
}

% This is called on each entry in a list of floats.  We're passed the
% xref label, in the form LABEL-title, which is how we save it in the
% aux file.  We strip off the -title and look up \XRLABEL-lof, which
% has the text we're supposed to typeset here.
%
% Figures without xref labels will not be included in the list (since
% they won't appear in the aux file).
%
\def\listoffloatsdo#1{\listoffloatsdoentry#1\finish}
\def\listoffloatsdoentry#1-title\finish{{%
  % Can't fully expand XR#1-lof because it can contain anything.  Just
  % pass the control sequence.  On the other hand, XR#1-pg is just the
  % page number, and we want to fully expand that so we can get a link
  % in pdf output.
  \toksA = \expandafter{\csname XR#1-lof\endcsname}%
  %
  % use the same \entry macro we use to generate the TOC and index.
  \edef\writeentry{\noexpand\entry{\the\toksA}{\csname XR#1-pg\endcsname}}%
  \writeentry
}}


\message{localization,}

% For single-language documents, @documentlanguage is usually given very
% early, just after @documentencoding.  Single argument is the language
% (de) or locale (de_DE) abbreviation.
%
{
  \catcode`\_ = \active
  \globaldefs=1
\parseargdef\documentlanguage{\begingroup
  \let_=\normalunderscore  % normal _ character for filenames
  \tex % read txi-??.tex file in plain TeX.
    % Read the file by the name they passed if it exists.
    \openin 1 txi-#1.tex
    \ifeof 1
      \documentlanguagetrywithoutunderscore{#1_\finish}%
    \else
      \globaldefs = 1  % everything in the txi-LL files needs to persist
      \input txi-#1.tex
    \fi
    \closein 1
  \endgroup % end raw TeX
\endgroup}
%
% If they passed de_DE, and txi-de_DE.tex doesn't exist,
% try txi-de.tex.
%
\gdef\documentlanguagetrywithoutunderscore#1_#2\finish{%
  \openin 1 txi-#1.tex
  \ifeof 1
    \errhelp = \nolanghelp
    \errmessage{Cannot read language file txi-#1.tex}%
  \else
    \globaldefs = 1  % everything in the txi-LL files needs to persist
    \input txi-#1.tex
  \fi
  \closein 1
}
}% end of special _ catcode
%
\newhelp\nolanghelp{The given language definition file cannot be found or
is empty.  Maybe you need to install it?  Putting it in the current
directory should work if nowhere else does.}

% This macro is called from txi-??.tex files; the first argument is the
% \language name to set (without the "\lang@" prefix), the second and
% third args are \{left,right}hyphenmin.
%
% The language names to pass are determined when the format is built.
% See the etex.log file created at that time, e.g.,
% /usr/local/texlive/2008/texmf-var/web2c/pdftex/etex.log.
%
% With TeX Live 2008, etex now includes hyphenation patterns for all
% available languages.  This means we can support hyphenation in
% Texinfo, at least to some extent.  (This still doesn't solve the
% accented characters problem.)
%
\catcode`@=11
\def\txisetlanguage#1#2#3{%
  % do not set the language if the name is undefined in the current TeX.
  \expandafter\ifx\csname lang@#1\endcsname \relax
    \message{no patterns for #1}%
  \else
    \global\language = \csname lang@#1\endcsname
  \fi
  % but there is no harm in adjusting the hyphenmin values regardless.
  \global\lefthyphenmin = #2\relax
  \global\righthyphenmin = #3\relax
}

% Helpers for encodings.
% Set the catcode of characters 128 through 255 to the specified number.
%
\def\setnonasciicharscatcode#1{%
   \count255=128
   \loop\ifnum\count255<256
      \global\catcode\count255=#1\relax
      \advance\count255 by 1
   \repeat
}
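% For example, \setnonasciicharscatcode\active makes each byte 128-255 an
% active character, so each can then be given a definition with \gdef, as
% is done for the 8-bit encodings below.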

\def\setnonasciicharscatcodenonglobal#1{%
   \count255=128
   \loop\ifnum\count255<256
      \catcode\count255=#1\relax
      \advance\count255 by 1
   \repeat
}

% @documentencoding sets the definition of non-ASCII characters
% according to the specified encoding.
%
\parseargdef\documentencoding{%
  % Encoding being declared for the document.
  \def\declaredencoding{\csname #1.enc\endcsname}%
  %
  % Supported encodings: names converted to tokens in order to be able
  % to compare them with \ifx.
  \def\ascii{\csname US-ASCII.enc\endcsname}%
  \def\latnine{\csname ISO-8859-15.enc\endcsname}%
  \def\latone{\csname ISO-8859-1.enc\endcsname}%
  \def\lattwo{\csname ISO-8859-2.enc\endcsname}%
  \def\utfeight{\csname UTF-8.enc\endcsname}%
  %
  \ifx \declaredencoding \ascii
     \asciichardefs
  %
  \else \ifx \declaredencoding \lattwo
     \setnonasciicharscatcode\active
     \lattwochardefs
  %
  \else \ifx \declaredencoding \latone
     \setnonasciicharscatcode\active
     \latonechardefs
  %
  \else \ifx \declaredencoding \latnine
     \setnonasciicharscatcode\active
     \latninechardefs
  %
  \else \ifx \declaredencoding \utfeight
     \setnonasciicharscatcode\active
     \utfeightchardefs
  %
  \else
    \message{Unknown document encoding #1, ignoring.}%
  %
  \fi % utfeight
  \fi % latnine
  \fi % latone
  \fi % lattwo
  \fi % ascii
}

% A message to be logged when using a character that isn't available in
% the default font encoding (OT1).
%
\def\missingcharmsg#1{\message{Character missing in OT1 encoding: #1.}}

% Take account of \c (plain) vs. \, (Texinfo) difference.
\def\cedilla#1{\ifx\c\ptexc\c{#1}\else\,{#1}\fi}

% First, make active non-ASCII characters in order for them to be
% correctly categorized when TeX reads the replacement text of
% macros containing the character definitions.
\setnonasciicharscatcode\active
%
% Latin1 (ISO-8859-1) character definitions.
\def\latonechardefs{%
  \gdef^^a0{\tie}
  \gdef^^a1{\exclamdown}
  \gdef^^a2{\missingcharmsg{CENT SIGN}}
  \gdef^^a3{{\pounds}}
  \gdef^^a4{\missingcharmsg{CURRENCY SIGN}}
  \gdef^^a5{\missingcharmsg{YEN SIGN}}
  \gdef^^a6{\missingcharmsg{BROKEN BAR}}
  \gdef^^a7{\S}
  \gdef^^a8{\"{}}
  \gdef^^a9{\copyright}
  \gdef^^aa{\ordf}
  \gdef^^ab{\guillemetleft}
  \gdef^^ac{$\lnot$}
  \gdef^^ad{\-}
  \gdef^^ae{\registeredsymbol}
  \gdef^^af{\={}}
  %
  \gdef^^b0{\textdegree}
  \gdef^^b1{$\pm$}
  \gdef^^b2{$^2$}
  \gdef^^b3{$^3$}
  \gdef^^b4{\'{}}
  \gdef^^b5{$\mu$}
  \gdef^^b6{\P}
  %
  \gdef^^b7{$^.$}
  \gdef^^b8{\cedilla\ }
  \gdef^^b9{$^1$}
  \gdef^^ba{\ordm}
  %
  \gdef^^bb{\guillemetright}
  \gdef^^bc{$1\over4$}
  \gdef^^bd{$1\over2$}
  \gdef^^be{$3\over4$}
  \gdef^^bf{\questiondown}
  %
  \gdef^^c0{\`A}
  \gdef^^c1{\'A}
  \gdef^^c2{\^A}
  \gdef^^c3{\~A}
  \gdef^^c4{\"A}
  \gdef^^c5{\ringaccent A}
  \gdef^^c6{\AE}
  \gdef^^c7{\cedilla C}
  \gdef^^c8{\`E}
  \gdef^^c9{\'E}
  \gdef^^ca{\^E}
  \gdef^^cb{\"E}
  \gdef^^cc{\`I}
  \gdef^^cd{\'I}
  \gdef^^ce{\^I}
  \gdef^^cf{\"I}
  %
  \gdef^^d0{\DH}
  \gdef^^d1{\~N}
  \gdef^^d2{\`O}
  \gdef^^d3{\'O}
  \gdef^^d4{\^O}
  \gdef^^d5{\~O}
  \gdef^^d6{\"O}
  \gdef^^d7{$\times$}
  \gdef^^d8{\O}
  \gdef^^d9{\`U}
  \gdef^^da{\'U}
  \gdef^^db{\^U}
  \gdef^^dc{\"U}
  \gdef^^dd{\'Y}
  \gdef^^de{\TH}
  \gdef^^df{\ss}
  %
  \gdef^^e0{\`a}
  \gdef^^e1{\'a}
  \gdef^^e2{\^a}
  \gdef^^e3{\~a}
  \gdef^^e4{\"a}
  \gdef^^e5{\ringaccent a}
  \gdef^^e6{\ae}
  \gdef^^e7{\cedilla c}
  \gdef^^e8{\`e}
  \gdef^^e9{\'e}
  \gdef^^ea{\^e}
  \gdef^^eb{\"e}
  \gdef^^ec{\`{\dotless i}}
  \gdef^^ed{\'{\dotless i}}
  \gdef^^ee{\^{\dotless i}}
  \gdef^^ef{\"{\dotless i}}
  %
  \gdef^^f0{\dh}
  \gdef^^f1{\~n}
  \gdef^^f2{\`o}
  \gdef^^f3{\'o}
  \gdef^^f4{\^o}
  \gdef^^f5{\~o}
  \gdef^^f6{\"o}
  \gdef^^f7{$\div$}
  \gdef^^f8{\o}
  \gdef^^f9{\`u}
  \gdef^^fa{\'u}
  \gdef^^fb{\^u}
  \gdef^^fc{\"u}
  \gdef^^fd{\'y}
  \gdef^^fe{\th}
  \gdef^^ff{\"y}
}

% Latin9 (ISO-8859-15) encoding character definitions.
\def\latninechardefs{%
  % Encoding is almost identical to Latin1.
  \latonechardefs
  %
  \gdef^^a4{\euro}
  \gdef^^a6{\v S}
  \gdef^^a8{\v s}
  \gdef^^b4{\v Z}
  \gdef^^b8{\v z}
  \gdef^^bc{\OE}
  \gdef^^bd{\oe}
  \gdef^^be{\"Y}
}

% Latin2 (ISO-8859-2) character definitions.
\def\lattwochardefs{%
  \gdef^^a0{\tie}
  \gdef^^a1{\ogonek{A}}
  \gdef^^a2{\u{}}
  \gdef^^a3{\L}
  \gdef^^a4{\missingcharmsg{CURRENCY SIGN}}
  \gdef^^a5{\v L}
  \gdef^^a6{\'S}
  \gdef^^a7{\S}
  \gdef^^a8{\"{}}
  \gdef^^a9{\v S}
  \gdef^^aa{\cedilla S}
  \gdef^^ab{\v T}
  \gdef^^ac{\'Z}
  \gdef^^ad{\-}
  \gdef^^ae{\v Z}
  \gdef^^af{\dotaccent Z}
  %
  \gdef^^b0{\textdegree}
  \gdef^^b1{\ogonek{a}}
  \gdef^^b2{\ogonek{ }}
  \gdef^^b3{\l}
  \gdef^^b4{\'{}}
  \gdef^^b5{\v l}
  \gdef^^b6{\'s}
  \gdef^^b7{\v{}}
  \gdef^^b8{\cedilla\ }
  \gdef^^b9{\v s}
  \gdef^^ba{\cedilla s}
  \gdef^^bb{\v t}
  \gdef^^bc{\'z}
  \gdef^^bd{\H{}}
  \gdef^^be{\v z}
  \gdef^^bf{\dotaccent z}
  %
  \gdef^^c0{\'R}
  \gdef^^c1{\'A}
  \gdef^^c2{\^A}
  \gdef^^c3{\u A}
  \gdef^^c4{\"A}
  \gdef^^c5{\'L}
  \gdef^^c6{\'C}
  \gdef^^c7{\cedilla C}
  \gdef^^c8{\v C}
  \gdef^^c9{\'E}
  \gdef^^ca{\ogonek{E}}
  \gdef^^cb{\"E}
  \gdef^^cc{\v E}
  \gdef^^cd{\'I}
  \gdef^^ce{\^I}
  \gdef^^cf{\v D}
  %
  \gdef^^d0{\DH}
  \gdef^^d1{\'N}
  \gdef^^d2{\v N}
  \gdef^^d3{\'O}
  \gdef^^d4{\^O}
  \gdef^^d5{\H O}
  \gdef^^d6{\"O}
  \gdef^^d7{$\times$}
  \gdef^^d8{\v R}
  \gdef^^d9{\ringaccent U}
  \gdef^^da{\'U}
  \gdef^^db{\H U}
  \gdef^^dc{\"U}
  \gdef^^dd{\'Y}
  \gdef^^de{\cedilla T}
  \gdef^^df{\ss}
  %
  \gdef^^e0{\'r}
  \gdef^^e1{\'a}
  \gdef^^e2{\^a}
  \gdef^^e3{\u a}
  \gdef^^e4{\"a}
  \gdef^^e5{\'l}
  \gdef^^e6{\'c}
  \gdef^^e7{\cedilla c}
  \gdef^^e8{\v c}
  \gdef^^e9{\'e}
  \gdef^^ea{\ogonek{e}}
  \gdef^^eb{\"e}
  \gdef^^ec{\v e}
  \gdef^^ed{\'{\dotless{i}}}
  \gdef^^ee{\^{\dotless{i}}}
  \gdef^^ef{\v d}
  %
  \gdef^^f0{\dh}
  \gdef^^f1{\'n}
  \gdef^^f2{\v n}
  \gdef^^f3{\'o}
  \gdef^^f4{\^o}
  \gdef^^f5{\H o}
  \gdef^^f6{\"o}
  \gdef^^f7{$\div$}
  \gdef^^f8{\v r}
  \gdef^^f9{\ringaccent u}
  \gdef^^fa{\'u}
  \gdef^^fb{\H u}
  \gdef^^fc{\"u}
  \gdef^^fd{\'y}
  \gdef^^fe{\cedilla t}
  \gdef^^ff{\dotaccent{}}
}

% UTF-8 character definitions.
%
% This code to support UTF-8 is based on LaTeX's utf8.def, with some
% changes for Texinfo conventions.  It is included here under the GPL by
% permission from Frank Mittelbach and the LaTeX team.
%
\newcount\countUTFx
\newcount\countUTFy
\newcount\countUTFz

\gdef\UTFviiiTwoOctets#1#2{\expandafter
   \UTFviiiDefined\csname u8:#1\string #2\endcsname}
%
\gdef\UTFviiiThreeOctets#1#2#3{\expandafter
   \UTFviiiDefined\csname u8:#1\string #2\string #3\endcsname}
%
\gdef\UTFviiiFourOctets#1#2#3#4{\expandafter
   \UTFviiiDefined\csname u8:#1\string #2\string #3\string #4\endcsname}

\gdef\UTFviiiDefined#1{%
  \ifx #1\relax
    \message{\linenumber Unicode char \string #1 not defined for Texinfo}%
  \else
    \expandafter #1%
  \fi
}
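% For example, with @documentencoding UTF-8 the byte ^^c3 is active and
% expands to "\UTFviiiTwoOctets^^c3"; reading the following byte ^^a9, it
% looks up \csname u8:^^c3\string^^a9\endcsname, which
% \DeclareUnicodeCharacter{00E9} below has defined to be \'e.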

\begingroup
  \catcode`\~13
  \catcode`\"12

  \def\UTFviiiLoop{%
    \global\catcode\countUTFx\active
    \uccode`\~\countUTFx
    \uppercase\expandafter{\UTFviiiTmp}%
    \advance\countUTFx by 1
    \ifnum\countUTFx < \countUTFy
      \expandafter\UTFviiiLoop
    \fi}

  \countUTFx = "C2
  \countUTFy = "E0
  \def\UTFviiiTmp{%
    \xdef~{\noexpand\UTFviiiTwoOctets\string~}}
  \UTFviiiLoop

  \countUTFx = "E0
  \countUTFy = "F0
  \def\UTFviiiTmp{%
    \xdef~{\noexpand\UTFviiiThreeOctets\string~}}
  \UTFviiiLoop

  \countUTFx = "F0
  \countUTFy = "F4
  \def\UTFviiiTmp{%
    \xdef~{\noexpand\UTFviiiFourOctets\string~}}
  \UTFviiiLoop
\endgroup

\begingroup
  \catcode`\"=12
  \catcode`\<=12
  \catcode`\.=12
  \catcode`\,=12
  \catcode`\;=12
  \catcode`\!=12
  \catcode`\~=13

  \gdef\DeclareUnicodeCharacter#1#2{%
    \countUTFz = "#1\relax
    %\wlog{\space\space defining Unicode char U+#1 (decimal \the\countUTFz)}%
    \begingroup
      \parseXMLCharref
      \def\UTFviiiTwoOctets##1##2{%
        \csname u8:##1\string ##2\endcsname}%
      \def\UTFviiiThreeOctets##1##2##3{%
        \csname u8:##1\string ##2\string ##3\endcsname}%
      \def\UTFviiiFourOctets##1##2##3##4{%
        \csname u8:##1\string ##2\string ##3\string ##4\endcsname}%
      \expandafter\expandafter\expandafter\expandafter
       \expandafter\expandafter\expandafter
       \gdef\UTFviiiTmp{#2}%
    \endgroup}

  \gdef\parseXMLCharref{%
    \ifnum\countUTFz < "A0\relax
      \errhelp = \EMsimple
      \errmessage{Cannot define Unicode char value < 00A0}%
    \else\ifnum\countUTFz < "800\relax
      \parseUTFviiiA,%
      \parseUTFviiiB C\UTFviiiTwoOctets.,%
    \else\ifnum\countUTFz < "10000\relax
      \parseUTFviiiA;%
      \parseUTFviiiA,%
      \parseUTFviiiB E\UTFviiiThreeOctets.{,;}%
    \else
      \parseUTFviiiA;%
      \parseUTFviiiA,%
      \parseUTFviiiA!%
      \parseUTFviiiB F\UTFviiiFourOctets.{!,;}%
    \fi\fi\fi
  }

  \gdef\parseUTFviiiA#1{%
    \countUTFx = \countUTFz
    \divide\countUTFz by 64
    \countUTFy = \countUTFz
    \multiply\countUTFz by 64
    \advance\countUTFx by -\countUTFz
    \advance\countUTFx by 128
    \uccode `#1\countUTFx
    \countUTFz = \countUTFy}

  \gdef\parseUTFviiiB#1#2#3#4{%
    \advance\countUTFz by "#10\relax
    \uccode `#3\countUTFz
    \uppercase{\gdef\UTFviiiTmp{#2#3#4}}}
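
  % For example, \DeclareUnicodeCharacter{00E9}{\'e} starts with
  % \countUTFz = 233: \parseUTFviiiA, sets \uccode`\, to
  % (233 mod 64) + 128 = "A9 and leaves \countUTFz = 233 div 64 = 3;
  % \parseUTFviiiB C\UTFviiiTwoOctets., then adds "C0 to get "C3, and the
  % \uppercase turns the delimiters into the two bytes ^^c3 ^^a9 -- the
  % UTF-8 encoding of U+00E9.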
\endgroup

\def\utfeightchardefs{%
  \DeclareUnicodeCharacter{00A0}{\tie}
  \DeclareUnicodeCharacter{00A1}{\exclamdown}
  \DeclareUnicodeCharacter{00A3}{\pounds}
  \DeclareUnicodeCharacter{00A8}{\"{ }}
  \DeclareUnicodeCharacter{00A9}{\copyright}
  \DeclareUnicodeCharacter{00AA}{\ordf}
  \DeclareUnicodeCharacter{00AB}{\guillemetleft}
  \DeclareUnicodeCharacter{00AD}{\-}
  \DeclareUnicodeCharacter{00AE}{\registeredsymbol}
  \DeclareUnicodeCharacter{00AF}{\={ }}

  \DeclareUnicodeCharacter{00B0}{\ringaccent{ }}
  \DeclareUnicodeCharacter{00B4}{\'{ }}
  \DeclareUnicodeCharacter{00B8}{\cedilla{ }}
  \DeclareUnicodeCharacter{00BA}{\ordm}
  \DeclareUnicodeCharacter{00BB}{\guillemetright}
  \DeclareUnicodeCharacter{00BF}{\questiondown}

  \DeclareUnicodeCharacter{00C0}{\`A}
  \DeclareUnicodeCharacter{00C1}{\'A}
  \DeclareUnicodeCharacter{00C2}{\^A}
  \DeclareUnicodeCharacter{00C3}{\~A}
  \DeclareUnicodeCharacter{00C4}{\"A}
  \DeclareUnicodeCharacter{00C5}{\AA}
  \DeclareUnicodeCharacter{00C6}{\AE}
  \DeclareUnicodeCharacter{00C7}{\cedilla{C}}
  \DeclareUnicodeCharacter{00C8}{\`E}
  \DeclareUnicodeCharacter{00C9}{\'E}
  \DeclareUnicodeCharacter{00CA}{\^E}
  \DeclareUnicodeCharacter{00CB}{\"E}
  \DeclareUnicodeCharacter{00CC}{\`I}
  \DeclareUnicodeCharacter{00CD}{\'I}
  \DeclareUnicodeCharacter{00CE}{\^I}
  \DeclareUnicodeCharacter{00CF}{\"I}

  \DeclareUnicodeCharacter{00D0}{\DH}
  \DeclareUnicodeCharacter{00D1}{\~N}
  \DeclareUnicodeCharacter{00D2}{\`O}
  \DeclareUnicodeCharacter{00D3}{\'O}
  \DeclareUnicodeCharacter{00D4}{\^O}
  \DeclareUnicodeCharacter{00D5}{\~O}
  \DeclareUnicodeCharacter{00D6}{\"O}
  \DeclareUnicodeCharacter{00D8}{\O}
  \DeclareUnicodeCharacter{00D9}{\`U}
  \DeclareUnicodeCharacter{00DA}{\'U}
  \DeclareUnicodeCharacter{00DB}{\^U}
  \DeclareUnicodeCharacter{00DC}{\"U}
  \DeclareUnicodeCharacter{00DD}{\'Y}
  \DeclareUnicodeCharacter{00DE}{\TH}
  \DeclareUnicodeCharacter{00DF}{\ss}

  \DeclareUnicodeCharacter{00E0}{\`a}
  \DeclareUnicodeCharacter{00E1}{\'a}
  \DeclareUnicodeCharacter{00E2}{\^a}
  \DeclareUnicodeCharacter{00E3}{\~a}
  \DeclareUnicodeCharacter{00E4}{\"a}
  \DeclareUnicodeCharacter{00E5}{\aa}
  \DeclareUnicodeCharacter{00E6}{\ae}
  \DeclareUnicodeCharacter{00E7}{\cedilla{c}}
  \DeclareUnicodeCharacter{00E8}{\`e}
  \DeclareUnicodeCharacter{00E9}{\'e}
  \DeclareUnicodeCharacter{00EA}{\^e}
  \DeclareUnicodeCharacter{00EB}{\"e}
  \DeclareUnicodeCharacter{00EC}{\`{\dotless{i}}}
  \DeclareUnicodeCharacter{00ED}{\'{\dotless{i}}}
  \DeclareUnicodeCharacter{00EE}{\^{\dotless{i}}}
  \DeclareUnicodeCharacter{00EF}{\"{\dotless{i}}}

  \DeclareUnicodeCharacter{00F0}{\dh}
  \DeclareUnicodeCharacter{00F1}{\~n}
  \DeclareUnicodeCharacter{00F2}{\`o}
  \DeclareUnicodeCharacter{00F3}{\'o}
  \DeclareUnicodeCharacter{00F4}{\^o}
  \DeclareUnicodeCharacter{00F5}{\~o}
  \DeclareUnicodeCharacter{00F6}{\"o}
  \DeclareUnicodeCharacter{00F8}{\o}
  \DeclareUnicodeCharacter{00F9}{\`u}
  \DeclareUnicodeCharacter{00FA}{\'u}
  \DeclareUnicodeCharacter{00FB}{\^u}
  \DeclareUnicodeCharacter{00FC}{\"u}
  \DeclareUnicodeCharacter{00FD}{\'y}
  \DeclareUnicodeCharacter{00FE}{\th}
  \DeclareUnicodeCharacter{00FF}{\"y}

  \DeclareUnicodeCharacter{0100}{\=A}
  \DeclareUnicodeCharacter{0101}{\=a}
  \DeclareUnicodeCharacter{0102}{\u{A}}
  \DeclareUnicodeCharacter{0103}{\u{a}}
  \DeclareUnicodeCharacter{0104}{\ogonek{A}}
  \DeclareUnicodeCharacter{0105}{\ogonek{a}}
  \DeclareUnicodeCharacter{0106}{\'C}
  \DeclareUnicodeCharacter{0107}{\'c}
  \DeclareUnicodeCharacter{0108}{\^C}
  \DeclareUnicodeCharacter{0109}{\^c}
  \DeclareUnicodeCharacter{010A}{\dotaccent{C}}
  \DeclareUnicodeCharacter{010B}{\dotaccent{c}}
  \DeclareUnicodeCharacter{010C}{\v{C}}
  \DeclareUnicodeCharacter{010D}{\v{c}}
  \DeclareUnicodeCharacter{010E}{\v{D}}

  \DeclareUnicodeCharacter{0112}{\=E}
  \DeclareUnicodeCharacter{0113}{\=e}
  \DeclareUnicodeCharacter{0114}{\u{E}}
  \DeclareUnicodeCharacter{0115}{\u{e}}
  \DeclareUnicodeCharacter{0116}{\dotaccent{E}}
  \DeclareUnicodeCharacter{0117}{\dotaccent{e}}
  \DeclareUnicodeCharacter{0118}{\ogonek{E}}
  \DeclareUnicodeCharacter{0119}{\ogonek{e}}
  \DeclareUnicodeCharacter{011A}{\v{E}}
  \DeclareUnicodeCharacter{011B}{\v{e}}
  \DeclareUnicodeCharacter{011C}{\^G}
  \DeclareUnicodeCharacter{011D}{\^g}
  \DeclareUnicodeCharacter{011E}{\u{G}}
  \DeclareUnicodeCharacter{011F}{\u{g}}

  \DeclareUnicodeCharacter{0120}{\dotaccent{G}}
  \DeclareUnicodeCharacter{0121}{\dotaccent{g}}
  \DeclareUnicodeCharacter{0124}{\^H}
  \DeclareUnicodeCharacter{0125}{\^h}
  \DeclareUnicodeCharacter{0128}{\~I}
  \DeclareUnicodeCharacter{0129}{\~{\dotless{i}}}
  \DeclareUnicodeCharacter{012A}{\=I}
  \DeclareUnicodeCharacter{012B}{\={\dotless{i}}}
  \DeclareUnicodeCharacter{012C}{\u{I}}
  \DeclareUnicodeCharacter{012D}{\u{\dotless{i}}}

  \DeclareUnicodeCharacter{0130}{\dotaccent{I}}
  \DeclareUnicodeCharacter{0131}{\dotless{i}}
  \DeclareUnicodeCharacter{0132}{IJ}
  \DeclareUnicodeCharacter{0133}{ij}
  \DeclareUnicodeCharacter{0134}{\^J}
  \DeclareUnicodeCharacter{0135}{\^{\dotless{j}}}
  \DeclareUnicodeCharacter{0139}{\'L}
  \DeclareUnicodeCharacter{013A}{\'l}

  \DeclareUnicodeCharacter{0141}{\L}
  \DeclareUnicodeCharacter{0142}{\l}
  \DeclareUnicodeCharacter{0143}{\'N}
  \DeclareUnicodeCharacter{0144}{\'n}
  \DeclareUnicodeCharacter{0147}{\v{N}}
  \DeclareUnicodeCharacter{0148}{\v{n}}
  \DeclareUnicodeCharacter{014C}{\=O}
  \DeclareUnicodeCharacter{014D}{\=o}
  \DeclareUnicodeCharacter{014E}{\u{O}}
  \DeclareUnicodeCharacter{014F}{\u{o}}

  \DeclareUnicodeCharacter{0150}{\H{O}}
  \DeclareUnicodeCharacter{0151}{\H{o}}
  \DeclareUnicodeCharacter{0152}{\OE}
  \DeclareUnicodeCharacter{0153}{\oe}
  \DeclareUnicodeCharacter{0154}{\'R}
  \DeclareUnicodeCharacter{0155}{\'r}
  \DeclareUnicodeCharacter{0158}{\v{R}}
  \DeclareUnicodeCharacter{0159}{\v{r}}
  \DeclareUnicodeCharacter{015A}{\'S}
  \DeclareUnicodeCharacter{015B}{\'s}
  \DeclareUnicodeCharacter{015C}{\^S}
  \DeclareUnicodeCharacter{015D}{\^s}
  \DeclareUnicodeCharacter{015E}{\cedilla{S}}
  \DeclareUnicodeCharacter{015F}{\cedilla{s}}

  \DeclareUnicodeCharacter{0160}{\v{S}}
  \DeclareUnicodeCharacter{0161}{\v{s}}
  \DeclareUnicodeCharacter{0162}{\cedilla{T}}
  \DeclareUnicodeCharacter{0163}{\cedilla{t}}
  \DeclareUnicodeCharacter{0164}{\v{T}}

  \DeclareUnicodeCharacter{0168}{\~U}
  \DeclareUnicodeCharacter{0169}{\~u}
  \DeclareUnicodeCharacter{016A}{\=U}
  \DeclareUnicodeCharacter{016B}{\=u}
  \DeclareUnicodeCharacter{016C}{\u{U}}
  \DeclareUnicodeCharacter{016D}{\u{u}}
  \DeclareUnicodeCharacter{016E}{\ringaccent{U}}
  \DeclareUnicodeCharacter{016F}{\ringaccent{u}}

  \DeclareUnicodeCharacter{0170}{\H{U}}
  \DeclareUnicodeCharacter{0171}{\H{u}}
  \DeclareUnicodeCharacter{0174}{\^W}
  \DeclareUnicodeCharacter{0175}{\^w}
  \DeclareUnicodeCharacter{0176}{\^Y}
  \DeclareUnicodeCharacter{0177}{\^y}
  \DeclareUnicodeCharacter{0178}{\"Y}
  \DeclareUnicodeCharacter{0179}{\'Z}
  \DeclareUnicodeCharacter{017A}{\'z}
  \DeclareUnicodeCharacter{017B}{\dotaccent{Z}}
  \DeclareUnicodeCharacter{017C}{\dotaccent{z}}
  \DeclareUnicodeCharacter{017D}{\v{Z}}
  \DeclareUnicodeCharacter{017E}{\v{z}}

  \DeclareUnicodeCharacter{01C4}{D\v{Z}}
  \DeclareUnicodeCharacter{01C5}{D\v{z}}
  \DeclareUnicodeCharacter{01C6}{d\v{z}}
  \DeclareUnicodeCharacter{01C7}{LJ}
  \DeclareUnicodeCharacter{01C8}{Lj}
  \DeclareUnicodeCharacter{01C9}{lj}
  \DeclareUnicodeCharacter{01CA}{NJ}
  \DeclareUnicodeCharacter{01CB}{Nj}
  \DeclareUnicodeCharacter{01CC}{nj}
  \DeclareUnicodeCharacter{01CD}{\v{A}}
  \DeclareUnicodeCharacter{01CE}{\v{a}}
  \DeclareUnicodeCharacter{01CF}{\v{I}}

  \DeclareUnicodeCharacter{01D0}{\v{\dotless{i}}}
  \DeclareUnicodeCharacter{01D1}{\v{O}}
  \DeclareUnicodeCharacter{01D2}{\v{o}}
  \DeclareUnicodeCharacter{01D3}{\v{U}}
  \DeclareUnicodeCharacter{01D4}{\v{u}}

  \DeclareUnicodeCharacter{01E2}{\={\AE}}
  \DeclareUnicodeCharacter{01E3}{\={\ae}}
  \DeclareUnicodeCharacter{01E6}{\v{G}}
  \DeclareUnicodeCharacter{01E7}{\v{g}}
  \DeclareUnicodeCharacter{01E8}{\v{K}}
  \DeclareUnicodeCharacter{01E9}{\v{k}}

  \DeclareUnicodeCharacter{01F0}{\v{\dotless{j}}}
  \DeclareUnicodeCharacter{01F1}{DZ}
  \DeclareUnicodeCharacter{01F2}{Dz}
  \DeclareUnicodeCharacter{01F3}{dz}
  \DeclareUnicodeCharacter{01F4}{\'G}
  \DeclareUnicodeCharacter{01F5}{\'g}
  \DeclareUnicodeCharacter{01F8}{\`N}
  \DeclareUnicodeCharacter{01F9}{\`n}
  \DeclareUnicodeCharacter{01FC}{\'{\AE}}
  \DeclareUnicodeCharacter{01FD}{\'{\ae}}
  \DeclareUnicodeCharacter{01FE}{\'{\O}}
  \DeclareUnicodeCharacter{01FF}{\'{\o}}

  \DeclareUnicodeCharacter{021E}{\v{H}}
  \DeclareUnicodeCharacter{021F}{\v{h}}

  \DeclareUnicodeCharacter{0226}{\dotaccent{A}}
  \DeclareUnicodeCharacter{0227}{\dotaccent{a}}
  \DeclareUnicodeCharacter{0228}{\cedilla{E}}
  \DeclareUnicodeCharacter{0229}{\cedilla{e}}
  \DeclareUnicodeCharacter{022E}{\dotaccent{O}}
  \DeclareUnicodeCharacter{022F}{\dotaccent{o}}

  \DeclareUnicodeCharacter{0232}{\=Y}
  \DeclareUnicodeCharacter{0233}{\=y}
  \DeclareUnicodeCharacter{0237}{\dotless{j}}

  \DeclareUnicodeCharacter{02DB}{\ogonek{ }}

  \DeclareUnicodeCharacter{1E02}{\dotaccent{B}}
  \DeclareUnicodeCharacter{1E03}{\dotaccent{b}}
  \DeclareUnicodeCharacter{1E04}{\udotaccent{B}}
  \DeclareUnicodeCharacter{1E05}{\udotaccent{b}}
  \DeclareUnicodeCharacter{1E06}{\ubaraccent{B}}
  \DeclareUnicodeCharacter{1E07}{\ubaraccent{b}}
  \DeclareUnicodeCharacter{1E0A}{\dotaccent{D}}
  \DeclareUnicodeCharacter{1E0B}{\dotaccent{d}}
  \DeclareUnicodeCharacter{1E0C}{\udotaccent{D}}
  \DeclareUnicodeCharacter{1E0D}{\udotaccent{d}}
  \DeclareUnicodeCharacter{1E0E}{\ubaraccent{D}}
  \DeclareUnicodeCharacter{1E0F}{\ubaraccent{d}}

  \DeclareUnicodeCharacter{1E1E}{\dotaccent{F}}
  \DeclareUnicodeCharacter{1E1F}{\dotaccent{f}}

  \DeclareUnicodeCharacter{1E20}{\=G}
  \DeclareUnicodeCharacter{1E21}{\=g}
  \DeclareUnicodeCharacter{1E22}{\dotaccent{H}}
  \DeclareUnicodeCharacter{1E23}{\dotaccent{h}}
  \DeclareUnicodeCharacter{1E24}{\udotaccent{H}}
  \DeclareUnicodeCharacter{1E25}{\udotaccent{h}}
  \DeclareUnicodeCharacter{1E26}{\"H}
  \DeclareUnicodeCharacter{1E27}{\"h}

  \DeclareUnicodeCharacter{1E30}{\'K}
  \DeclareUnicodeCharacter{1E31}{\'k}
  \DeclareUnicodeCharacter{1E32}{\udotaccent{K}}
  \DeclareUnicodeCharacter{1E33}{\udotaccent{k}}
  \DeclareUnicodeCharacter{1E34}{\ubaraccent{K}}
  \DeclareUnicodeCharacter{1E35}{\ubaraccent{k}}
  \DeclareUnicodeCharacter{1E36}{\udotaccent{L}}
  \DeclareUnicodeCharacter{1E37}{\udotaccent{l}}
  \DeclareUnicodeCharacter{1E3A}{\ubaraccent{L}}
  \DeclareUnicodeCharacter{1E3B}{\ubaraccent{l}}
  \DeclareUnicodeCharacter{1E3E}{\'M}
  \DeclareUnicodeCharacter{1E3F}{\'m}

  \DeclareUnicodeCharacter{1E40}{\dotaccent{M}}
  \DeclareUnicodeCharacter{1E41}{\dotaccent{m}}
  \DeclareUnicodeCharacter{1E42}{\udotaccent{M}}
  \DeclareUnicodeCharacter{1E43}{\udotaccent{m}}
  \DeclareUnicodeCharacter{1E44}{\dotaccent{N}}
  \DeclareUnicodeCharacter{1E45}{\dotaccent{n}}
  \DeclareUnicodeCharacter{1E46}{\udotaccent{N}}
  \DeclareUnicodeCharacter{1E47}{\udotaccent{n}}
  \DeclareUnicodeCharacter{1E48}{\ubaraccent{N}}
  \DeclareUnicodeCharacter{1E49}{\ubaraccent{n}}

  \DeclareUnicodeCharacter{1E54}{\'P}
  \DeclareUnicodeCharacter{1E55}{\'p}
  \DeclareUnicodeCharacter{1E56}{\dotaccent{P}}
  \DeclareUnicodeCharacter{1E57}{\dotaccent{p}}
  \DeclareUnicodeCharacter{1E58}{\dotaccent{R}}
  \DeclareUnicodeCharacter{1E59}{\dotaccent{r}}
  \DeclareUnicodeCharacter{1E5A}{\udotaccent{R}}
  \DeclareUnicodeCharacter{1E5B}{\udotaccent{r}}
  \DeclareUnicodeCharacter{1E5E}{\ubaraccent{R}}
  \DeclareUnicodeCharacter{1E5F}{\ubaraccent{r}}

  \DeclareUnicodeCharacter{1E60}{\dotaccent{S}}
  \DeclareUnicodeCharacter{1E61}{\dotaccent{s}}
  \DeclareUnicodeCharacter{1E62}{\udotaccent{S}}
  \DeclareUnicodeCharacter{1E63}{\udotaccent{s}}
  \DeclareUnicodeCharacter{1E6A}{\dotaccent{T}}
  \DeclareUnicodeCharacter{1E6B}{\dotaccent{t}}
  \DeclareUnicodeCharacter{1E6C}{\udotaccent{T}}
  \DeclareUnicodeCharacter{1E6D}{\udotaccent{t}}
  \DeclareUnicodeCharacter{1E6E}{\ubaraccent{T}}
  \DeclareUnicodeCharacter{1E6F}{\ubaraccent{t}}

  \DeclareUnicodeCharacter{1E7C}{\~V}
  \DeclareUnicodeCharacter{1E7D}{\~v}
  \DeclareUnicodeCharacter{1E7E}{\udotaccent{V}}
  \DeclareUnicodeCharacter{1E7F}{\udotaccent{v}}

  \DeclareUnicodeCharacter{1E80}{\`W}
  \DeclareUnicodeCharacter{1E81}{\`w}
  \DeclareUnicodeCharacter{1E82}{\'W}
  \DeclareUnicodeCharacter{1E83}{\'w}
  \DeclareUnicodeCharacter{1E84}{\"W}
  \DeclareUnicodeCharacter{1E85}{\"w}
  \DeclareUnicodeCharacter{1E86}{\dotaccent{W}}
  \DeclareUnicodeCharacter{1E87}{\dotaccent{w}}
  \DeclareUnicodeCharacter{1E88}{\udotaccent{W}}
  \DeclareUnicodeCharacter{1E89}{\udotaccent{w}}
  \DeclareUnicodeCharacter{1E8A}{\dotaccent{X}}
  \DeclareUnicodeCharacter{1E8B}{\dotaccent{x}}
  \DeclareUnicodeCharacter{1E8C}{\"X}
  \DeclareUnicodeCharacter{1E8D}{\"x}
  \DeclareUnicodeCharacter{1E8E}{\dotaccent{Y}}
  \DeclareUnicodeCharacter{1E8F}{\dotaccent{y}}

  \DeclareUnicodeCharacter{1E90}{\^Z}
  \DeclareUnicodeCharacter{1E91}{\^z}
  \DeclareUnicodeCharacter{1E92}{\udotaccent{Z}}
  \DeclareUnicodeCharacter{1E93}{\udotaccent{z}}
  \DeclareUnicodeCharacter{1E94}{\ubaraccent{Z}}
  \DeclareUnicodeCharacter{1E95}{\ubaraccent{z}}
  \DeclareUnicodeCharacter{1E96}{\ubaraccent{h}}
  \DeclareUnicodeCharacter{1E97}{\"t}
  \DeclareUnicodeCharacter{1E98}{\ringaccent{w}}
  \DeclareUnicodeCharacter{1E99}{\ringaccent{y}}

  \DeclareUnicodeCharacter{1EA0}{\udotaccent{A}}
  \DeclareUnicodeCharacter{1EA1}{\udotaccent{a}}

  \DeclareUnicodeCharacter{1EB8}{\udotaccent{E}}
  \DeclareUnicodeCharacter{1EB9}{\udotaccent{e}}
  \DeclareUnicodeCharacter{1EBC}{\~E}
  \DeclareUnicodeCharacter{1EBD}{\~e}

  \DeclareUnicodeCharacter{1ECA}{\udotaccent{I}}
  \DeclareUnicodeCharacter{1ECB}{\udotaccent{i}}
  \DeclareUnicodeCharacter{1ECC}{\udotaccent{O}}
  \DeclareUnicodeCharacter{1ECD}{\udotaccent{o}}

  \DeclareUnicodeCharacter{1EE4}{\udotaccent{U}}
  \DeclareUnicodeCharacter{1EE5}{\udotaccent{u}}

  \DeclareUnicodeCharacter{1EF2}{\`Y}
  \DeclareUnicodeCharacter{1EF3}{\`y}
  \DeclareUnicodeCharacter{1EF4}{\udotaccent{Y}}
  \DeclareUnicodeCharacter{1EF5}{\udotaccent{y}}

  \DeclareUnicodeCharacter{1EF8}{\~Y}
  \DeclareUnicodeCharacter{1EF9}{\~y}

  \DeclareUnicodeCharacter{2013}{--}
  \DeclareUnicodeCharacter{2014}{---}
  \DeclareUnicodeCharacter{2018}{\quoteleft}
  \DeclareUnicodeCharacter{2019}{\quoteright}
  \DeclareUnicodeCharacter{201A}{\quotesinglbase}
  \DeclareUnicodeCharacter{201C}{\quotedblleft}
  \DeclareUnicodeCharacter{201D}{\quotedblright}
  \DeclareUnicodeCharacter{201E}{\quotedblbase}
  \DeclareUnicodeCharacter{2022}{\bullet}
  \DeclareUnicodeCharacter{2026}{\dots}
  \DeclareUnicodeCharacter{2039}{\guilsinglleft}
  \DeclareUnicodeCharacter{203A}{\guilsinglright}
  \DeclareUnicodeCharacter{20AC}{\euro}

  \DeclareUnicodeCharacter{2192}{\expansion}
  \DeclareUnicodeCharacter{21D2}{\result}

  \DeclareUnicodeCharacter{2212}{\minus}
  \DeclareUnicodeCharacter{2217}{\point}
  \DeclareUnicodeCharacter{2261}{\equiv}
}% end of \utfeightchardefs


% US-ASCII character definitions.
\def\asciichardefs{% nothing need be done
   \relax
}

% Make non-ASCII characters printable again for compatibility with
% existing Texinfo documents that may use them, even without declaring a
% document encoding.
%
\setnonasciicharscatcode \other


\message{formatting,}

\newdimen\defaultparindent \defaultparindent = 15pt

\chapheadingskip = 15pt plus 4pt minus 2pt
\secheadingskip = 12pt plus 3pt minus 2pt
\subsecheadingskip = 9pt plus 2pt minus 2pt

% Prevent underfull vbox error messages.
\vbadness = 10000

% Don't be very finicky about underfull hboxes, either.
\hbadness = 6666

% Following George Bush, get rid of widows and orphans.
\widowpenalty=10000
\clubpenalty=10000

% Use TeX 3.0's \emergencystretch to help line breaking, but if we're
% using an old version of TeX, don't do anything.  We want the amount of
% stretch added to depend on the line length, hence the dependence on
% \hsize.  We call this whenever the paper size is set.
%
\def\setemergencystretch{%
  \ifx\emergencystretch\thisisundefined
    % Allow us to assign to \emergencystretch anyway.
    \def\emergencystretch{\dimen0}%
  \else
    \emergencystretch = .15\hsize
  \fi
}
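
% For example, a layout macro that sets its own \hsize would call this
% afterward (a sketch for illustration, not one of the predefined
% layouts below):
%   \hsize = 5.5in
%   \setemergencystretch  % sets \emergencystretch = .15\hsize = .825in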

% Parameters in order: 1) textheight; 2) textwidth;
% 3) voffset; 4) hoffset; 5) binding offset; 6) topskip;
% 7) physical page height; 8) physical page width.
%
% We also call \setleading{\textleading}, so the caller should define
% \textleading.  The caller should also set \parskip.
%
\def\internalpagesizes#1#2#3#4#5#6#7#8{%
  \voffset = #3\relax
  \topskip = #6\relax
  \splittopskip = \topskip
  %
  \vsize = #1\relax
  \advance\vsize by \topskip
  \outervsize = \vsize
  \advance\outervsize by 2\topandbottommargin
  \pageheight = \vsize
  %
  \hsize = #2\relax
  \outerhsize = \hsize
  \advance\outerhsize by 0.5in
  \pagewidth = \hsize
  %
  \normaloffset = #4\relax
  \bindingoffset = #5\relax
  %
  \ifpdf
    \pdfpageheight #7\relax
    \pdfpagewidth #8\relax
    % if we don't reset these, they will remain at "1 true in" from
    % whatever layout pdftex was dumped with.
    \pdfhorigin = 1 true in
    \pdfvorigin = 1 true in
  \fi
  %
  \setleading{\textleading}
  %
  \parindent = \defaultparindent
  \setemergencystretch
}

% @letterpaper (the default).
\def\letterpaper{{\globaldefs = 1
  \parskip = 3pt plus 2pt minus 1pt
  \textleading = 13.2pt
  %
  % If page is nothing but text, make it come out even.
  \internalpagesizes{607.2pt}{6in}% that's 46 lines
                    {\voffset}{.25in}%
                    {\bindingoffset}{36pt}%
                    {11in}{8.5in}%
}}

% Use @smallbook to reset parameters for 7x9.25 trim size.
\def\smallbook{{\globaldefs = 1
  \parskip = 2pt plus 1pt
  \textleading = 12pt
  %
  \internalpagesizes{7.5in}{5in}%
                    {-.2in}{0in}%
                    {\bindingoffset}{16pt}%
                    {9.25in}{7in}%
  %
  \lispnarrowing = 0.3in
  \tolerance = 700
  \hfuzz = 1pt
  \contentsrightmargin = 0pt
  \defbodyindent = .5cm
}}

% Use @smallerbook to reset parameters for 6x9 trim size.
% (Just testing, parameters still in flux.)
\def\smallerbook{{\globaldefs = 1
  \parskip = 1.5pt plus 1pt
  \textleading = 12pt
  %
  \internalpagesizes{7.4in}{4.8in}%
                    {-.2in}{-.4in}%
                    {0pt}{14pt}%
                    {9in}{6in}%
  %
  \lispnarrowing = 0.25in
  \tolerance = 700
  \hfuzz = 1pt
  \contentsrightmargin = 0pt
  \defbodyindent = .4cm
}}

% Use @afourpaper to print on European A4 paper.
\def\afourpaper{{\globaldefs = 1
  \parskip = 3pt plus 2pt minus 1pt
  \textleading = 13.2pt
  %
  % Double-side printing via postscript on Laserjet 4050
  % prints double-sided nicely when \bindingoffset=10mm and \hoffset=-6mm.
  % To change the settings for a different printer or situation, adjust
  % \normaloffset until the front-side and back-side texts align.  Then
  % do the same for \bindingoffset.  You can set these for testing in
  % your texinfo source file like this:
  % @tex
  % \global\normaloffset = -6mm
  % \global\bindingoffset = 10mm
  % @end tex
  \internalpagesizes{673.2pt}{160mm}% that's 51 lines
                    {\voffset}{\hoffset}%
                    {\bindingoffset}{44pt}%
                    {297mm}{210mm}%
  %
  \tolerance = 700
  \hfuzz = 1pt
  \contentsrightmargin = 0pt
  \defbodyindent = 5mm
}}

% Use @afivepaper to print on European A5 paper.
% From romildo@urano.iceb.ufop.br, 2 July 2000.
% He also recommends making @example and @lisp use the small fonts.
\def\afivepaper{{\globaldefs = 1
  \parskip = 2pt plus 1pt minus 0.1pt
  \textleading = 12.5pt
  %
  \internalpagesizes{160mm}{120mm}%
                    {\voffset}{\hoffset}%
                    {\bindingoffset}{8pt}%
                    {210mm}{148mm}%
  %
  \lispnarrowing = 0.2in
  \tolerance = 800
  \hfuzz = 1.2pt
  \contentsrightmargin = 0pt
  \defbodyindent = 2mm
  \tableindent = 12mm
}}

% A specific text layout, 24x15cm overall, intended for A4 paper.
\def\afourlatex{{\globaldefs = 1
  \afourpaper
  \internalpagesizes{237mm}{150mm}%
                    {\voffset}{4.6mm}%
                    {\bindingoffset}{7mm}%
                    {297mm}{210mm}%
  %
  % Must explicitly reset to 0 because we call \afourpaper.
  \globaldefs = 0
}}

% Use @afourwide to print on A4 paper in landscape format.
\def\afourwide{{\globaldefs = 1
  \afourpaper
  \internalpagesizes{241mm}{165mm}%
                    {\voffset}{-2.95mm}%
                    {\bindingoffset}{7mm}%
                    {297mm}{210mm}%
  \globaldefs = 0
}}

% @pagesizes TEXTHEIGHT[,TEXTWIDTH]
% Perhaps we should allow setting the margins, \topskip, \parskip,
% and/or leading, also. Or perhaps we should compute them somehow.
%
\parseargdef\pagesizes{\pagesizesyyy #1,,\finish}
\def\pagesizesyyy#1,#2,#3\finish{{%
  \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \hsize=#2\relax \fi
  \globaldefs = 1
  %
  \parskip = 3pt plus 2pt minus 1pt
  \setleading{\textleading}%
  %
  \dimen0 = #1\relax
  \advance\dimen0 by \voffset
  %
  \dimen2 = \hsize
  \advance\dimen2 by \normaloffset
  %
  \internalpagesizes{#1}{\hsize}%
                    {\voffset}{\normaloffset}%
                    {\bindingoffset}{44pt}%
                    {\dimen0}{\dimen2}%
}}

% Set default to letter.
%
\letterpaper


\message{and turning on texinfo input format.}

\def^^L{\par} % remove \outer, so ^L can appear in an @comment

% DEL is a comment character, in case @c does not suffice.
\catcode`\^^? = 14

% Define macros to output various characters with catcode for normal text.
\catcode`\"=\other \def\normaldoublequote{"}
\catcode`\$=\other \def\normaldollar{$}%$ font-lock fix
\catcode`\+=\other \def\normalplus{+}
\catcode`\<=\other \def\normalless{<}
\catcode`\>=\other \def\normalgreater{>}
\catcode`\^=\other \def\normalcaret{^}
\catcode`\_=\other \def\normalunderscore{_}
\catcode`\|=\other \def\normalverticalbar{|}
\catcode`\~=\other \def\normaltilde{~}

% This macro is used to make a character print one way in \tt
% (where it can probably be output as-is), and another way in other fonts,
% where something hairier probably needs to be done.
%
% #1 is what to print if we are indeed using \tt; #2 is what to print
% otherwise.  Since all the Computer Modern typewriter fonts have zero
% interword stretch (and shrink), and it is reasonable to expect all
% typewriter fonts to have this, we can check that font parameter.
%
\def\ifusingtt#1#2{\ifdim \fontdimen3\font=0pt #1\else #2\fi}
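
% For example, the active `_' defined below uses this to print a plain
% underscore in \tt but a drawn rule in other fonts.  A hypothetical
% command following the same pattern (not one defined in this file):
%   \def\mystar{\ifusingtt{{\char42}}{$\ast$}}% raw * in \tt, math asterisk else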

% Same as above, but check for italic font.  Actually this also catches
% non-italic slanted fonts since it is impossible to distinguish them from
% italic fonts.  But since this is only used by $ and it uses \sl anyway
% this is not a problem.
\def\ifusingit#1#2{\ifdim \fontdimen1\font>0pt #1\else #2\fi}

% Turn off all special characters except @
% (and those which the user can use as if they were ordinary).
% Most of these we simply print from the \tt font, but for some, we can
% use math or other variants that look better in normal text.

\catcode`\"=\active
\def\activedoublequote{{\tt\char34}}
\let"=\activedoublequote
\catcode`\~=\active
\def~{{\tt\char126}}
\chardef\hat=`\^
\catcode`\^=\active
\def^{{\tt \hat}}

\catcode`\_=\active
\def_{\ifusingtt\normalunderscore\_}
\let\realunder=_
% Subroutine for the previous macro.
\def\_{\leavevmode \kern.07em \vbox{\hrule width.3em height.1ex}\kern .07em }

\catcode`\|=\active
\def|{{\tt\char124}}
\chardef \less=`\<
\catcode`\<=\active
\def<{{\tt \less}}
\chardef \gtr=`\>
\catcode`\>=\active
\def>{{\tt \gtr}}
\catcode`\+=\active
\def+{{\tt \char 43}}
\catcode`\$=\active
\def${\ifusingit{{\sl\$}}\normaldollar}%$ font-lock fix

% If a .fmt file is being used, characters that might appear in a file
% name cannot be active until we have parsed the command line.
% So turn them off again, and have \everyjob (or @setfilename) turn them on.
% \otherifyactive is called near the end of this file.
\def\otherifyactive{\catcode`+=\other \catcode`\_=\other}

% Used sometimes to turn off (effectively) the active characters even after
% parsing them.
\def\turnoffactive{%
  \normalturnoffactive
  \otherbackslash
}

\catcode`\@=0

% \backslashcurfont outputs one backslash character in current font,
% as in \char`\\.
\global\chardef\backslashcurfont=`\\
\global\let\rawbackslashxx=\backslashcurfont  % let existing .??s files work

% \realbackslash is an actual character `\' with catcode other, and
% \doublebackslash is two of them (for the pdf outlines).
{\catcode`\\=\other @gdef@realbackslash{\} @gdef@doublebackslash{\\}}

% In texinfo, backslash is an active character; it prints the backslash
% in fixed width font.
\catcode`\\=\active  % @ for escape char from now on.

% The story here is that in math mode, the \char of \backslashcurfont
% ends up printing the roman \ from the math symbol font (because \char
% in math mode uses the \mathcode, and plain.tex sets
% \mathcode`\\="026E).  It seems better for @backslashchar{} to always
% print a typewriter backslash, hence we use an explicit \mathchar,
% which is the decimal equivalent of "715c (class 7, e.g., use \fam;
% ignored family value; char position "5C).  We can't use " for the
% usual hex value because it has already been made active.
@def@normalbackslash{{@tt @ifmmode @mathchar29020 @else @backslashcurfont @fi}}
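% (Checking the arithmetic: "715C = 7*"1000 + 1*"100 + "5C
%  = 28672 + 256 + 92 = 29020 decimal, the @mathchar value used here.)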
@let@backslashchar = @normalbackslash % @backslashchar{} is for user documents.

% On startup, @fixbackslash assigns:
%  @let \ = @normalbackslash
% \rawbackslash defines an active \ to do \backslashcurfont.
% \otherbackslash defines an active \ to be a literal `\' character with
% catcode other.  We switch back and forth between these.
@gdef@rawbackslash{@let\=@backslashcurfont}
@gdef@otherbackslash{@let\=@realbackslash}

% Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of
% the literal character `\'.
%
@def@normalturnoffactive{%
  @let"=@normaldoublequote
  @let$=@normaldollar %$ font-lock fix
  @let+=@normalplus
  @let<=@normalless
  @let>=@normalgreater
  @let\=@normalbackslash
  @let^=@normalcaret
  @let_=@normalunderscore
  @let|=@normalverticalbar
  @let~=@normaltilde
  @markupsetuplqdefault
  @markupsetuprqdefault
  @unsepspaces
}

% Make _ and + \other characters, temporarily.
% This is canceled by @fixbackslash.
@otherifyactive

% If a .fmt file is being used, we don't want the `\input texinfo' to show up.
% That is what \eatinput is for; after that, the `\' should revert to printing
% a backslash.
%
@gdef@eatinput input texinfo{@fixbackslash}
@global@let\ = @eatinput

% On the other hand, perhaps the file did not have a `\input texinfo'. Then
% the first `\' in the file would cause an error. This macro tries to fix
% that, assuming it is called before the first `\' could plausibly occur.
% Also turn back on active characters that might appear in the input
% file name, in case not using a pre-dumped format.
%
@gdef@fixbackslash{%
  @ifx\@eatinput @let\ = @normalbackslash @fi
  @catcode`+=@active
  @catcode`@_=@active
}

% Say @foo, not \foo, in error messages.
@escapechar = `@@

% These (along with & and #) are made active for url-breaking, so need
% active definitions as the normal characters.
@def@normaldot{.}
@def@normalquest{?}
@def@normalslash{/}

% These look ok in all fonts, so just make them not special.
% @hashchar{} gets its own user-level command, because of #line.
@catcode`@& = @other @def@normalamp{&}
@catcode`@# = @other @def@normalhash{#}
@catcode`@% = @other @def@normalpercent{%}

@let @hashchar = @normalhash

@c Finally, make ` and ' active, so that txicodequoteundirected and
@c txicodequotebacktick work right in, e.g., @w{@code{`foo'}}.  If we
@c don't make ` and ' active, @code will not get them as active chars.
@c Do this last of all since we use ` in the previous @catcode assignments.
@catcode`@'=@active
@catcode`@`=@active
@markupsetuplqdefault
@markupsetuprqdefault

@c Local variables:
@c eval: (add-hook 'write-file-hooks 'time-stamp)
@c page-delimiter: "^\\\\message"
@c time-stamp-start: "def\\\\texinfoversion{"
@c time-stamp-format: "%:y-%02m-%02d.%02H"
@c time-stamp-end: "}"
@c End:

@c vim:sw=2:

@ignore
   arch-tag: e1b36e32-c96e-4135-a41a-0b2efa2ea115
@end ignore
27807 27808 27809 27810 27811 27812 27813 27814 27815 27816 27817 27818 27819 27820 27821 27822 27823 27824 27825 27826 27827 27828 27829 27830 27831 27832 27833 27834 27835 27836 27837 27838 27839 27840 27841 27842 27843 27844 27845 27846 27847 27848 27849 27850 27851 27852 27853 27854 27855 27856 27857 27858 27859 27860 27861 27862 27863 27864 27865 27866 27867 27868 27869 27870 27871 27872 27873 27874 27875 27876 27877 27878 27879 27880 27881 27882 27883 27884 27885 27886 27887 27888 27889 27890 27891 27892 27893 27894 27895 27896 27897 27898 27899 27900 27901 27902 27903 27904 27905 27906 27907 27908 27909 27910 27911 27912 27913 27914 27915 27916 27917 27918 27919 27920 27921 27922 27923 27924 27925 27926 27927 27928 27929 27930 27931 27932 27933 27934 27935 27936 27937 27938 27939 27940 27941 27942 27943 27944 27945 27946 27947 27948 27949 27950 27951 27952 27953 27954 27955 27956 27957 27958 27959 27960 27961 27962 27963 27964 27965 27966 27967 27968 27969 27970 27971 27972 27973 27974 27975 27976 27977 27978 27979 27980 27981 27982 27983 27984 27985 27986 27987 27988 27989 27990 27991 27992 27993 27994 27995 27996 27997 27998 27999 28000 28001 28002 28003 28004 28005 28006 28007 28008 28009 28010 28011 28012 28013 28014 28015 28016 28017 28018 28019 28020 28021 28022 28023 28024 28025 28026 28027 28028 28029 28030 28031 28032 28033 28034 28035 28036 28037 28038 28039 28040 28041 28042 28043 28044 28045 28046 28047 28048 28049 28050 28051 28052 28053 28054 28055 28056 28057 28058 28059 28060 28061 28062 28063 28064 28065 28066 28067 28068 28069 28070 28071 28072 28073 28074 28075 28076 28077 28078 28079 28080 28081 28082 28083 28084 28085 28086 28087 28088 28089 28090 28091 28092 28093 28094 28095 28096 28097 28098 28099 28100 28101 28102 28103 28104 28105 28106 28107 28108 28109 28110 28111 28112 28113 28114 28115 28116 28117 28118 28119 28120 28121 28122 28123 28124 28125 28126 28127 28128 28129 28130 28131 28132 28133 28134 28135 28136 28137 28138 28139 
28140 28141 28142 28143 28144 28145 28146 28147 28148 28149 28150 28151 28152 28153 28154 28155 28156 28157 28158 28159 28160 28161 28162 28163 28164 28165 28166 28167 28168 28169 28170 28171 28172 28173 28174 28175 28176 28177 28178 28179 28180 28181 28182 28183 28184 28185 28186 28187 28188 28189 28190 28191 28192 28193 28194 28195 28196 28197 28198 28199 28200 28201 28202 28203 28204 28205 28206 28207 28208 28209 28210 28211 28212 28213 28214 28215 28216 28217 28218 28219 28220 28221 28222 28223 28224 28225 28226 28227 28228 28229 28230 28231 28232 28233 28234 28235 28236 28237 28238 28239 28240 28241 28242 28243 28244 28245 28246 28247 28248 28249 28250 28251 28252 28253 28254 28255 28256 28257 28258 28259 28260 28261 28262 28263 28264 28265 28266 28267 28268 28269 28270 28271 28272 28273 28274 28275 28276 28277 28278 28279 28280 28281 28282 28283 28284 28285 28286 28287 28288 28289 28290 28291 28292 28293 28294 28295 28296 28297 28298 28299 28300 28301 28302 28303 28304 28305 28306 28307 28308 28309 28310 28311 28312 28313 28314 28315 28316 28317 28318 28319 28320 28321 28322 28323 28324 28325 28326 28327 28328 28329 28330 28331 28332 28333 28334 28335 28336 28337 28338 28339 28340 28341 28342 28343 28344 28345 28346 28347 28348 28349 28350 28351 28352 28353 28354 28355 28356 28357 28358 28359 28360 28361 28362 28363 28364 28365 28366 28367 28368 28369 28370 28371 28372 28373 28374 28375 28376 28377 28378 28379 28380 28381 28382 28383 28384 28385 28386 28387 28388 28389 28390 28391 28392 28393 28394 28395 28396 28397 28398 28399 28400 28401 28402 28403 28404 28405 28406 28407 28408 28409 28410 28411 28412 28413 28414 28415 28416 28417 28418 28419 28420 28421 28422 28423 28424 28425 28426 28427 28428 28429 28430 28431 28432 28433 28434 28435 28436 28437 28438 28439 28440 28441 28442 28443 28444 28445 28446 28447 28448 28449 28450 28451 28452 28453 28454 28455 28456 28457 28458 28459 28460 28461 28462 28463 28464 28465 28466 28467 28468 28469 28470 28471 28472 
28473 28474 28475 28476 28477 28478 28479 28480 28481 28482 28483 28484 28485 28486 28487 28488 28489 28490 28491 28492 28493 28494 28495 28496 28497 28498 28499 28500 28501 28502 28503 28504 28505 28506 28507 28508 28509 28510 28511 28512 28513 28514 28515 28516 28517 28518 28519 28520 28521 28522 28523 28524 28525 28526 28527 28528 28529 28530 28531 28532 28533 28534 28535 28536 28537 28538 28539 28540 28541 28542 28543 28544 28545 28546 28547 28548 28549 28550 28551 28552 28553 28554 28555 28556 28557 28558 28559 28560 28561 28562 28563 28564 28565 28566 28567 28568 28569 28570 28571 28572 28573 28574 28575 28576 28577 28578 28579 28580 28581 28582 28583 28584 28585 28586 28587 28588 28589 28590 28591 28592 28593 28594 28595 28596 28597 28598 28599 28600 28601 28602 28603 28604 28605 28606 28607 28608 28609 28610 28611 28612 28613 28614 28615 28616 28617 28618 28619 28620 28621 28622 28623 28624 28625 28626 28627 28628 28629 28630 28631 28632 28633 28634 28635 28636 28637 28638 28639 28640 28641 28642 28643 28644 28645 28646 28647 28648 28649 28650 28651 28652 28653 28654 28655 28656 28657 28658 28659 28660 28661 28662 28663 28664 28665 28666 28667 28668 28669 28670 28671 28672 28673 28674 28675 28676 28677 28678 28679 28680 28681 28682 28683 28684 28685 28686 28687 28688 28689 28690 28691 28692 28693 28694 28695 28696 28697 28698 28699 28700 28701 28702 28703 28704 28705 28706 28707 28708 28709 28710 28711 28712 28713 28714 28715 28716 28717 28718 28719 28720 28721 28722 28723 28724 28725 28726 28727 28728 28729 28730 28731 28732 28733 28734 28735 28736 28737 28738 28739 28740 28741 28742 28743 28744 28745 28746 28747 28748 28749 28750 28751 28752 28753 28754 28755 28756 28757 28758 28759 28760 28761 28762 28763 28764 28765 28766 28767 28768 28769 28770 28771 28772 28773 28774 28775 28776 28777 28778 28779 28780 28781 28782 28783 28784 28785 28786 28787 28788 28789 28790 28791 28792 28793 28794 28795 28796 28797 28798 28799 28800 28801 28802 28803 28804 28805 
28806 28807 28808 28809 28810 28811 28812 28813 28814 28815 28816 28817 28818 28819 28820 28821 28822 28823 28824 28825 28826 28827 28828 28829 28830 28831 28832 28833 28834 28835 28836 28837 28838 28839 28840 28841 28842 28843 28844 28845 28846 28847 28848 28849 28850 28851 28852 28853 28854 28855 28856 28857 28858 28859 28860 28861 28862 28863 28864 28865 28866 28867 28868 28869 28870 28871 28872 28873 28874 28875 28876 28877 28878 28879 28880 28881 28882 28883 28884 28885 28886 28887 28888 28889 28890 28891 28892 28893 28894 28895 28896 28897 28898 28899 28900 28901 28902 28903 28904 28905 28906 28907 28908 28909 28910 28911 28912 28913 28914 28915 28916 28917 28918 28919 28920 28921 28922 28923 28924 28925 28926 28927 28928 28929 28930 28931 28932 28933 28934 28935 28936 28937 28938 28939 28940 28941 28942 28943 28944 28945 28946 28947 28948 28949 28950 28951 28952 28953 28954 28955 28956 28957 28958 28959 28960 28961 28962 28963 28964 28965 28966 28967 28968 28969 28970 28971 28972 28973 28974 28975 28976 28977 28978 28979 28980 28981 28982 28983 28984 28985 28986 28987 28988 28989 28990 28991 28992 28993 28994 28995 28996 28997 28998 28999 29000 29001 29002 29003 29004 29005 29006 29007 29008 29009 29010 29011 29012 29013 29014 29015 29016 29017 29018 29019 29020 29021 29022 29023 29024 29025 29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040 29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055 29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070 29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085 29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100 29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115 29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130 29131 29132 29133 29134 29135 29136 29137 29138 
29139 29140 29141 29142 29143 29144 29145 29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160 29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175 29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190 29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205 29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220 29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235 29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250 29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265 29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280 29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295 29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310 29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325 29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340 29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355 29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370 29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385 29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400 29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415 29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430 29431 29432 29433 29434 29435 29436 29437 29438 29439 29440 29441 29442 29443 29444 29445 29446 29447 29448 29449 29450 29451 29452 29453 29454 29455 29456 29457 29458 29459 29460 29461 29462 29463 29464 29465 29466 29467 29468 29469 29470 29471 
29472 29473 29474 29475 29476 29477 29478 29479 29480 29481 29482 29483 29484 29485 29486 29487 29488 29489 29490 29491 29492 29493 29494 29495 29496 29497 29498 29499 29500 29501 29502 29503 29504 29505 29506 29507 29508 29509 29510 29511 29512 29513 29514 29515 29516 29517 29518 29519 29520 29521 29522 29523 29524 29525 29526 29527 29528 29529 29530 29531 29532 29533 29534 29535 29536 29537 29538 29539 29540 29541 29542 29543 29544 29545 29546 29547 29548 29549 29550 29551 29552 29553 29554 29555 29556 29557 29558 29559 29560 29561 29562 29563 29564 29565 29566 29567 29568 29569 29570 29571 29572 29573 29574 29575 29576 29577 29578 29579 29580 29581 29582 29583 29584 29585 29586 29587 29588 29589 29590 29591 29592 29593 29594 29595 29596 29597 29598 29599 29600 29601 29602 29603 29604 29605 29606 29607 29608 29609 29610 29611 29612 29613 29614 29615 29616 29617 29618 29619 29620 29621 29622 29623 29624 29625 29626 29627 29628 29629 29630 29631 29632 29633 29634 29635 29636 29637 29638 29639 29640 29641 29642 29643 29644 29645 29646 29647 29648 29649 29650 29651 29652 29653 29654 29655 29656 29657 29658 29659 29660 29661 29662 29663 29664 29665 29666 29667 29668 29669 29670 29671 29672 29673 29674 29675 29676 29677 29678 29679 29680 29681 29682 29683 29684 29685 29686 29687 29688 29689 29690 29691 29692 29693 29694 29695 29696 29697 29698 29699 29700 29701 29702 29703 29704 29705 29706 29707 29708 29709 29710 29711 29712 29713 29714 29715 29716 29717 29718 29719 29720 29721 29722 29723 29724 29725 29726 29727 29728 29729 29730 29731 29732 29733 29734 29735 29736 29737 29738 29739 29740 29741 29742 29743 29744 29745 29746 29747 29748 29749 29750 29751 29752 29753 29754 29755 29756 29757 29758 29759 29760 29761 29762 29763 29764 29765 29766 29767 29768 29769 29770 29771 29772 29773 29774 29775 29776 29777 29778 29779 29780 29781 29782 29783 29784 29785 29786 29787 29788 29789 29790 29791 29792 29793 29794 29795 29796 29797 29798 29799 29800 29801 29802 29803 29804 
29805 29806 29807 29808 29809 29810 29811 29812 29813 29814 29815 29816 29817 29818 29819 29820 29821 29822 29823 29824 29825 29826 29827 29828 29829 29830 29831 29832 29833 29834 29835 29836 29837 29838 29839 29840 29841 29842 29843 29844 29845 29846 29847 29848 29849 29850 29851 29852 29853 29854 29855 29856 29857 29858 29859 29860 29861 29862 29863 29864 29865 29866 29867 29868 29869 29870 29871 29872 29873 29874 29875 29876 29877 29878 29879 29880 29881 29882 29883 29884 29885 29886 29887 29888 29889 29890 29891 29892 29893 29894 29895 29896 29897 29898 29899 29900 29901 29902 29903 29904 29905 29906 29907 29908 29909 29910 29911 29912 29913 29914 29915 29916 29917 29918 29919 29920 29921 29922 29923 29924 29925 29926 29927 29928 29929 29930 29931 29932 29933 29934 29935 29936 29937 29938 29939 29940 29941 29942 29943 29944 29945 29946 29947 29948 29949 29950 29951 29952 29953 29954 29955 29956 29957 29958 29959 29960 29961 29962 29963 29964 29965 29966 29967 29968 29969 29970 29971 29972 29973 29974 29975 29976 29977 29978 29979 29980 29981 29982 29983 29984 29985 29986 29987 29988 29989 29990 29991 29992 29993 29994 29995 29996 29997 29998 29999 30000 30001 30002 30003 30004 30005 30006 30007 30008 30009 30010 30011 30012 30013 30014 30015 30016 30017 30018 30019 30020 30021 30022 30023 30024 30025 30026 30027 30028 30029 30030 30031 30032 30033 30034 30035 30036 30037 30038 30039 30040 30041 30042 30043 30044 30045 30046 30047 30048 30049 30050 30051 30052 30053 30054 30055 30056 30057 30058 30059 30060 30061 30062 30063 30064 30065 30066 30067 30068 30069 30070 30071 30072 30073 30074 30075 30076 30077 30078 30079 30080 30081 30082 30083 30084 30085 30086 30087 30088 30089 30090 30091 30092 30093 30094 30095 30096 30097 30098 30099 30100 30101 30102 30103 30104 30105 30106 30107 30108 30109 30110 30111 30112 30113 30114 30115 30116 30117 30118 30119 30120 30121 30122 30123 30124 30125 30126 30127 30128 30129 30130 30131 30132 30133 30134 30135 30136 30137 
30138 30139 30140 30141 30142 30143 30144 30145 30146 30147 30148 30149 30150 30151 30152 30153 30154 30155 30156 30157 30158 30159 30160 30161 30162 30163 30164 30165 30166 30167 30168 30169 30170 30171 30172 30173 30174 30175 30176 30177 30178 30179 30180 30181 30182 30183 30184 30185 30186 30187 30188 30189 30190 30191 30192 30193 30194 30195 30196 30197 30198 30199 30200 30201 30202 30203 30204 30205 30206 30207 30208 30209 30210 30211 30212 30213 30214 30215 30216 30217 30218 30219 30220 30221 30222 30223 30224 30225 30226 30227 30228 30229 30230 30231 30232 30233 30234 30235 30236 30237 30238 30239 30240 30241 30242 30243 30244 30245 30246 30247 30248 30249 30250 30251 30252 30253 30254 30255 30256 30257 30258 30259 30260 30261 30262 30263 30264 30265 30266 30267 30268 30269 30270 30271 30272 30273 30274 30275 30276 30277 30278 30279 30280 30281 30282 30283 30284 30285 30286 30287 30288 30289 30290 30291 30292 30293 30294 30295 30296 30297 30298 30299 30300 30301 30302 30303 30304 30305 30306 30307 30308 30309 30310 30311 30312 30313 30314 30315 30316 30317 30318 30319 30320 30321 30322 30323 30324 30325 30326 30327 30328 30329 30330 30331 30332 30333 30334 30335 30336 30337 30338 30339 30340 30341 30342 30343 30344 30345 30346 30347 30348 30349 30350 30351 30352 30353 30354 30355 30356 30357 30358 30359 30360 30361 30362 30363 30364 30365 30366 30367 30368 30369 30370 30371 30372 30373 30374 30375 30376 30377 30378 30379 30380 30381 30382 30383 30384 30385 30386 30387 30388 30389 30390 30391 30392 30393 30394 30395 30396 30397 30398 30399 30400 30401 30402 30403 30404 30405 30406 30407 30408 30409 30410 30411 30412 30413 30414 30415 30416 30417 30418 30419 30420 30421 30422 30423 30424 30425 30426 30427 30428 30429 30430 30431 30432 30433 30434 30435 30436 30437 30438 30439 30440 30441 30442 30443 30444 30445 30446 30447 30448 30449 30450 30451 30452 30453 30454 30455 30456 30457 30458 30459 30460 30461 30462 30463 30464 30465 30466 30467 30468 30469 30470 
30471 30472 30473 30474 30475 30476 30477 30478 30479 30480 30481 30482 30483 30484 30485 30486 30487 30488 30489 30490 30491 30492 30493 30494 30495 30496 30497 30498 30499 30500 30501 30502 30503 30504 30505 30506 30507 30508 30509 30510 30511 30512 30513 30514 30515 30516 30517 30518 30519 30520 30521 30522 30523 30524 30525 30526 30527 30528 30529 30530 30531 30532 30533 30534 30535 30536 30537 30538 30539 30540 30541 30542 30543 30544 30545 30546 30547 30548 30549 30550 30551 30552 30553 30554 30555 30556 30557 30558 30559 30560 30561 30562 30563 30564 30565 30566 30567 30568 30569 30570 30571 30572 30573 30574 30575 30576 30577 30578 30579 30580 30581 30582 30583 30584 30585 30586 30587 30588 30589 30590 30591 30592 30593 30594 30595 30596 30597 30598 30599 30600 30601 30602 30603 30604 30605 30606 30607 30608 30609 30610 30611 30612 30613 30614 30615 30616 30617 30618 30619 30620 30621 30622 30623 30624 30625 30626 30627 30628 30629 30630 30631 30632 30633 30634 30635 30636 30637 30638 30639 30640 30641 30642 30643 30644 30645 30646 30647 30648 30649 30650 30651 30652 30653 30654 30655 30656 30657 30658 30659 30660 30661 30662 30663 30664 30665 30666 30667 30668 30669 30670 30671 30672 30673 30674 30675 30676 30677 30678 30679 30680 30681 30682 30683 30684 30685 30686 30687 30688 30689 30690 30691 30692 30693 30694 30695 30696 30697 30698 30699 30700 30701 30702 30703 30704 30705 30706 30707 30708 30709 30710 30711 30712 30713 30714 30715 30716 30717 30718 30719 30720 30721 30722 30723 30724 30725 30726 30727 30728 30729 30730 30731 30732 30733 30734 30735 30736 30737 30738 30739 30740 30741 30742 30743 30744 30745 30746 30747 30748 30749 30750 30751 30752 30753 30754 30755 30756 30757 30758 30759 30760 30761 30762 30763 30764 30765 30766 30767 30768 30769 30770 30771 30772 30773 30774 30775 30776 30777 30778 30779 30780 30781 30782 30783 30784 30785 30786 30787 30788 30789 30790 30791 30792 30793 30794 30795 30796 30797 30798 30799 30800 30801 30802 30803 
30804 30805 30806 30807 30808 30809 30810 30811 30812 30813 30814 30815 30816 30817 30818 30819 30820 30821 30822 30823 30824 30825 30826 30827 30828 30829 30830 30831 30832 30833 30834 30835 30836 30837 30838 30839 30840 30841 30842 30843 30844 30845 30846 30847 30848 30849 30850 30851 30852 30853 30854 30855 30856 30857 30858 30859 30860 30861 30862 30863 30864 30865 30866 30867 30868 30869 30870 30871 30872 30873 30874 30875 30876 30877 30878 30879 30880 30881 30882 30883 30884 30885 30886 30887 30888 30889 30890 30891 30892 30893 30894 30895 30896 30897 30898 30899 30900 30901 30902 30903 30904 30905 30906 30907 30908 30909 30910 30911 30912 30913 30914 30915 30916 30917 30918 30919 30920 30921 30922 30923 30924 30925 30926 30927 30928 30929 30930 30931 30932 30933 30934 30935 30936 30937 30938 30939 30940 30941 30942 30943 30944 30945 30946 30947 30948 30949 30950 30951 30952 30953 30954 30955 30956 30957 30958 30959 30960 30961 30962 30963 30964 30965 30966 30967 30968 30969 30970 30971 30972 30973 30974 30975 30976 30977 30978 30979 30980 30981 30982 30983 30984 30985 30986 30987 30988 30989 30990 30991 30992 30993 30994 30995 30996 30997 30998 30999 31000 31001 31002 31003 31004 31005 31006 31007 31008 31009 31010 31011 31012 31013 31014 31015 31016 31017 31018 31019 31020 31021 31022 31023 31024 31025 31026 31027 31028 31029 31030 31031 31032 31033 31034 31035 31036 31037 31038 31039 31040 31041 31042 31043 31044 31045 31046 31047 31048 31049 31050 31051 31052 31053 31054 31055 31056 31057 31058 31059 31060 31061 31062 31063 31064 31065 31066 31067 31068 31069 31070 31071 31072 31073 31074 31075 31076 31077 31078 31079 31080 31081 31082 31083 31084 31085 31086 31087 31088 31089 31090 31091 31092 31093 31094 31095 31096 31097 31098 31099 31100 31101 31102 31103 31104 31105 31106 31107 31108 31109 31110 31111 31112 31113 31114 31115 31116 31117 31118 31119 31120 31121 31122 31123 31124 31125 31126 31127 31128 31129 31130 31131 31132 31133 31134 31135 31136 
31137 31138 31139 31140 31141 31142 31143 31144 31145 31146 31147 31148 31149 31150 31151 31152 31153 31154 31155 31156 31157 31158 31159 31160 31161 31162 31163 31164 31165 31166 31167 31168 31169 31170 31171 31172 31173 31174 31175 31176 31177 31178 31179 31180 31181 31182 31183 31184 31185 31186 31187 31188 31189 31190 31191 31192 31193 31194 31195 31196 31197 31198 31199 31200 31201 31202 31203 31204 31205 31206 31207 31208 31209 31210 31211 31212 31213 31214 31215 31216 31217 31218 31219 31220 31221 31222 31223 31224 31225 31226 31227 31228 31229 31230 31231 31232 31233 31234 31235 31236 31237 31238 31239 31240 31241 31242 31243 31244 31245 31246 31247 31248 31249 31250 31251 31252 31253 31254 31255 31256 31257 31258 31259 31260 31261 31262 31263 31264 31265 31266 31267 31268 31269 31270 31271 31272 31273 31274 31275 31276 31277 31278 31279 31280 31281 31282 31283 31284 31285 31286 31287 31288 31289 31290 31291 31292 31293 31294 31295 31296 31297 31298 31299 31300 31301 31302 31303 31304 31305 31306 31307 31308 31309 31310 31311 31312 31313 31314 31315 31316 31317 31318 31319 31320 31321 31322 31323 31324 31325 31326 31327 31328 31329 31330 31331 31332 31333 31334 31335 31336 31337 31338 31339 31340 31341 31342 31343 31344 31345 31346 31347 31348 31349 31350 31351 31352 31353 31354 31355 31356 31357 31358 31359 31360 31361 31362 31363 31364 31365 31366 31367 31368 31369 31370 31371 31372 31373 31374 31375 31376 31377 31378 31379 31380 31381 31382 31383 31384 31385 31386 31387 31388 31389 31390 31391 31392 31393 31394 31395 31396 31397 31398 31399 31400 31401 31402 31403 31404 31405 31406 31407 31408 31409 31410 31411 31412 31413 31414 31415 31416 31417 31418 31419 31420 31421 31422 31423 31424 31425 31426 31427 31428 31429 31430 31431 31432 31433 31434 31435 31436 31437 31438 31439 31440 31441 31442 31443 31444 31445 31446 31447 31448 31449 31450 31451 31452 31453 31454 31455 31456 31457 31458 31459 31460 31461 31462 31463 31464 31465 31466 31467 31468 31469 
31470 31471 31472 31473 31474 31475 31476 31477 31478 31479 31480 31481 31482 31483 31484 31485 31486 31487 31488 31489 31490 31491 31492 31493 31494 31495 31496 31497 31498 31499 31500 31501 31502 31503 31504 31505 31506 31507 31508 31509 31510 31511 31512 31513 31514 31515 31516 31517 31518 31519 31520 31521 31522 31523 31524 31525 31526 31527 31528 31529 31530 31531 31532 31533 31534 31535 31536 31537 31538 31539 31540 31541 31542 31543 31544 31545 31546 31547 31548 31549 31550 31551 31552 31553 31554 31555 31556 31557 31558 31559 31560 31561 31562 31563 31564 31565 31566 31567 31568 31569 31570 31571 31572 31573 31574 31575 31576 31577 31578 31579 31580 31581 31582 31583 31584 31585 31586 31587 31588 31589 31590 31591 31592 31593 31594 31595 31596 31597 31598 31599 31600 31601 31602 31603 31604 31605 31606 31607 31608 31609 31610 31611 31612 31613 31614 31615 31616 31617 31618 31619 31620 31621 31622 31623 31624 31625 31626 31627 31628 31629 31630 31631 31632 31633 31634 31635 31636 31637 31638 31639 31640 31641 31642 31643 31644 31645 31646 31647 31648 31649 31650 31651 31652 31653 31654 31655 31656 31657 31658 31659 31660 31661 31662 31663 31664 31665 31666 31667 31668 31669 31670 31671 31672 31673 31674 31675 31676 31677 31678 31679 31680 31681 31682 31683 31684 31685 31686 31687 31688 31689 31690 31691 31692 31693 31694 31695 31696 31697 31698 31699 31700 31701 31702 31703 31704 31705 31706 31707 31708 31709 31710 31711 31712 31713 31714 31715 31716 31717 31718 31719 31720 31721 31722 31723 31724 31725 31726 31727 31728 31729 31730 31731 31732 31733 31734 31735 31736 31737 31738 31739 31740 31741 31742 31743 31744 31745 31746 31747 31748 31749 31750 31751 31752 31753 31754 31755 31756 31757 31758 31759 31760 31761 31762 31763 31764 31765 31766 31767 31768 31769 31770 31771 31772 31773 31774 31775 31776 31777 31778 31779 31780 31781 31782 31783 31784 31785 31786 31787 31788 31789 31790 31791 31792 31793 31794 31795 31796 31797 31798 31799 31800 31801 31802 
31803 31804 31805 31806 31807 31808 31809 31810 31811 31812 31813 31814 31815 31816 31817 31818 31819 31820 31821 31822 31823 31824 31825 31826 31827 31828 31829 31830 31831 31832 31833 31834 31835 31836 31837 31838 31839 31840 31841 31842 31843 31844 31845 31846 31847 31848 31849 31850 31851 31852 31853 31854 31855 31856 31857 31858 31859 31860 31861 31862 31863 31864 31865 31866 31867 31868 31869 31870 31871 31872 31873 31874 31875 31876 31877 31878 31879 31880 31881 31882 31883 31884 31885 31886 31887 31888 31889 31890 31891 31892 31893 31894 31895 31896 31897 31898 31899 31900 31901 31902 31903 31904 31905 31906 31907 31908 31909 31910 31911 31912 31913 31914 31915 31916 31917 31918 31919 31920 31921 31922 31923 31924 31925 31926 31927 31928 31929 31930 31931 31932 31933 31934 31935 31936 31937 31938 31939 31940 31941 31942 31943 31944 31945 31946 31947 31948 31949 31950 31951 31952 31953 31954 31955 31956 31957 31958 31959 31960 31961 31962 31963 31964 31965 31966 31967 31968 31969 31970 31971 31972 31973 31974 31975 31976 31977 31978 31979 31980 31981 31982 31983 31984 31985 31986 31987 31988 31989 31990 31991 31992 31993 31994 31995 31996 31997 31998 31999 32000 32001 32002 32003 32004 32005 32006 32007 32008 32009 32010 32011 32012 32013 32014 32015 32016 32017 32018 32019 32020 32021 32022 32023 32024 32025 32026 32027 32028 32029 32030 32031 32032 32033 32034 32035 32036 32037 32038 32039 32040 32041 32042 32043 32044 32045 32046 32047 32048 32049 32050 32051 32052 32053 32054 32055 32056 32057 32058 32059 32060 32061 32062 32063 32064 32065 32066 32067 32068 32069 32070 32071 32072 32073 32074 32075 32076 32077 32078 32079 32080 32081 32082 32083 32084 32085 32086 32087 32088 32089 32090 32091 32092 32093 32094 32095 32096 32097 32098 32099 32100 32101 32102 32103 32104 32105 32106 32107 32108 32109 32110 32111 32112 32113 32114 32115 32116 32117 32118 32119 32120 32121 32122 32123 32124 32125 32126 32127 32128 32129 32130 32131 32132 32133 32134 32135 
This is gawk.info, produced by makeinfo version 6.1 from gawk.texi.

Copyright (C) 1989, 1991, 1992, 1993, 1996-2005, 2007, 2009-2016
Free Software Foundation, Inc.


   This is Edition 4.1 of 'GAWK: Effective AWK Programming: A User's
Guide for GNU Awk', for the 4.1.4 (or later) version of the GNU
implementation of AWK.

   Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License", with the
Front-Cover Texts being "A GNU Manual", and with the Back-Cover Texts as
in (a) below.  A copy of the license is included in the section entitled
"GNU Free Documentation License".

  a. The FSF's Back-Cover Text is: "You have the freedom to copy and
     modify this GNU manual."
INFO-DIR-SECTION Text creation and manipulation
START-INFO-DIR-ENTRY
* Gawk: (gawk).                 A text scanning and processing language.
END-INFO-DIR-ENTRY

INFO-DIR-SECTION Individual utilities
START-INFO-DIR-ENTRY
* awk: (gawk)Invoking gawk.                     Text scanning and processing.
END-INFO-DIR-ENTRY


File: gawk.info,  Node: Top,  Next: Foreword3,  Up: (dir)

General Introduction
********************

This file documents 'awk', a program that you can use to select
particular records in a file and perform operations upon them.
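   For instance (a quick illustration only, using a throwaway data file
rather than one of this manual's sample files), a one-line 'awk' program
can select the records whose first field matches a given string:

```shell
# Create a small two-record input file for the demonstration.
printf 'Amelia 555-5553\nBroderick 555-0542\n' > demo-list

# Select and print only the record whose first field is "Amelia".
awk '$1 == "Amelia" { print }' demo-list

# Clean up the throwaway file.
rm demo-list
```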

   Copyright (C) 1989, 1991, 1992, 1993, 1996-2005, 2007, 2009-2016
Free Software Foundation, Inc.


   This is Edition 4.1 of 'GAWK: Effective AWK Programming: A User's
Guide for GNU Awk', for the 4.1.4 (or later) version of the GNU
implementation of AWK.

   Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License", with the
Front-Cover Texts being "A GNU Manual", and with the Back-Cover Texts as
in (a) below.  A copy of the license is included in the section entitled
"GNU Free Documentation License".

  a. The FSF's Back-Cover Text is: "You have the freedom to copy and
     modify this GNU manual."

* Menu:

* Foreword3::                      Some nice words about this
                                   Info file.
* Foreword4::                      More nice words.
* Preface::                        What this Info file is about; brief
                                   history and acknowledgments.
* Getting Started::                A basic introduction to using
                                   'awk'. How to run an 'awk'
                                   program. Command-line syntax.
* Invoking Gawk::                  How to run 'gawk'.
* Regexp::                         All about matching things using regular
                                   expressions.
* Reading Files::                  How to read files and manipulate fields.
* Printing::                       How to print using 'awk'. Describes
                                   the 'print' and 'printf'
                                   statements. Also describes redirection of
                                   output.
* Expressions::                    Expressions are the basic building blocks
                                   of statements.
* Patterns and Actions::           Overviews of patterns and actions.
* Arrays::                         The description and use of arrays. Also
                                   includes array-oriented control statements.
* Functions::                      Built-in and user-defined functions.
* Library Functions::              A Library of 'awk' Functions.
* Sample Programs::                Many 'awk' programs with complete
                                   explanations.
* Advanced Features::              Stuff for advanced users, specific to
                                   'gawk'.
* Internationalization::           Getting 'gawk' to speak your
                                   language.
* Debugger::                       The 'gawk' debugger.
* Arbitrary Precision Arithmetic:: Arbitrary precision arithmetic with
                                   'gawk'.
* Dynamic Extensions::             Adding new built-in functions to
                                   'gawk'.
* Language History::               The evolution of the 'awk'
                                   language.
* Installation::                   Installing 'gawk' under various
                                   operating systems.
* Notes::                          Notes about adding things to 'gawk'
                                   and possible future work.
* Basic Concepts::                 A very quick introduction to programming
                                   concepts.
* Glossary::                       An explanation of some unfamiliar terms.
* Copying::                        Your right to copy and distribute
                                   'gawk'.
* GNU Free Documentation License:: The license for this Info file.
* Index::                          Concept and Variable Index.

* History::                             The history of 'gawk' and
                                        'awk'.
* Names::                               What name to use to find
                                        'awk'.
* This Manual::                         Using this Info file. Includes
                                        sample input files that you can use.
* Conventions::                         Typographical Conventions.
* Manual History::                      Brief history of the GNU project and
                                        this Info file.
* How To Contribute::                   Helping to save the world.
* Acknowledgments::                     Acknowledgments.
* Running gawk::                        How to run 'gawk' programs;
                                        includes command-line syntax.
* One-shot::                            Running a short throwaway
                                        'awk' program.
* Read Terminal::                       Using no input files (input from the
                                        keyboard instead).
* Long::                                Putting permanent 'awk'
                                        programs in files.
* Executable Scripts::                  Making self-contained 'awk'
                                        programs.
* Comments::                            Adding documentation to 'gawk'
                                        programs.
* Quoting::                             More discussion of shell quoting
                                        issues.
* DOS Quoting::                         Quoting in Windows Batch Files.
* Sample Data Files::                   Sample data files for use in the
                                        'awk' programs illustrated in
                                        this Info file.
* Very Simple::                         A very simple example.
* Two Rules::                           A less simple one-line example using
                                        two rules.
* More Complex::                        A more complex example.
* Statements/Lines::                    Subdividing or combining statements
                                        into lines.
* Other Features::                      Other Features of 'awk'.
* When::                                When to use 'gawk' and when to
                                        use other things.
* Intro Summary::                       Summary of the introduction.
* Command Line::                        How to run 'awk'.
* Options::                             Command-line options and their
                                        meanings.
* Other Arguments::                     Input file names and variable
                                        assignments.
* Naming Standard Input::               How to specify standard input with
                                        other files.
* Environment Variables::               The environment variables
                                        'gawk' uses.
* AWKPATH Variable::                    Searching directories for
                                        'awk' programs.
* AWKLIBPATH Variable::                 Searching directories for
                                        'awk' shared libraries.
* Other Environment Variables::         Other environment variables that
                                        affect 'gawk'.
* Exit Status::                         'gawk''s exit status.
* Include Files::                       Including other files into your
                                        program.
* Loading Shared Libraries::            Loading shared libraries into your
                                        program.
* Obsolete::                            Obsolete Options and/or features.
* Undocumented::                        Undocumented Options and Features.
* Invoking Summary::                    Invocation summary.
* Regexp Usage::                        How to Use Regular Expressions.
* Escape Sequences::                    How to write nonprinting characters.
* Regexp Operators::                    Regular Expression Operators.
* Bracket Expressions::                 What can go between '[...]'.
* Leftmost Longest::                    How much text matches.
* Computed Regexps::                    Using Dynamic Regexps.
* GNU Regexp Operators::                Operators specific to GNU software.
* Case-sensitivity::                    How to do case-insensitive matching.
* Strong Regexp Constants::             Strongly typed regexp constants.
* Regexp Summary::                      Regular expressions summary.
* Records::                             Controlling how data is split into
                                        records.
* awk split records::                   How standard 'awk' splits
                                        records.
* gawk split records::                  How 'gawk' splits records.
* Fields::                              An introduction to fields.
* Nonconstant Fields::                  Nonconstant Field Numbers.
* Changing Fields::                     Changing the Contents of a Field.
* Field Separators::                    The field separator and how to change
                                        it.
* Default Field Splitting::             How fields are normally separated.
* Regexp Field Splitting::              Using regexps as the field separator.
* Single Character Fields::             Making each character a separate
                                        field.
* Command Line Field Separator::        Setting 'FS' from the command
                                        line.
* Full Line Fields::                    Making the full line be a single
                                        field.
* Field Splitting Summary::             Some final points and a summary table.
* Constant Size::                       Reading constant width data.
* Splitting By Content::                Defining fields by content.
* Multiple Line::                       Reading multiline records.
* Getline::                             Reading files under explicit program
                                        control using the 'getline'
                                        function.
* Plain Getline::                       Using 'getline' with no
                                        arguments.
* Getline/Variable::                    Using 'getline' into a variable.
* Getline/File::                        Using 'getline' from a file.
* Getline/Variable/File::               Using 'getline' into a variable
                                        from a file.
* Getline/Pipe::                        Using 'getline' from a pipe.
* Getline/Variable/Pipe::               Using 'getline' into a variable
                                        from a pipe.
* Getline/Coprocess::                   Using 'getline' from a coprocess.
* Getline/Variable/Coprocess::          Using 'getline' into a variable
                                        from a coprocess.
* Getline Notes::                       Important things to know about
                                        'getline'.
* Getline Summary::                     Summary of 'getline' Variants.
* Read Timeout::                        Reading input with a timeout.
* Retrying Input::                      Retrying input after certain errors.
* Command-line directories::            What happens if you put a directory on
                                        the command line.
* Input Summary::                       Input summary.
* Input Exercises::                     Exercises.
* Print::                               The 'print' statement.
* Print Examples::                      Simple examples of 'print'
                                        statements.
* Output Separators::                   The output separators and how to
                                        change them.
* OFMT::                                Controlling Numeric Output With
                                        'print'.
* Printf::                              The 'printf' statement.
* Basic Printf::                        Syntax of the 'printf' statement.
* Control Letters::                     Format-control letters.
* Format Modifiers::                    Format-specification modifiers.
* Printf Examples::                     Several examples.
* Redirection::                         How to redirect output to multiple
                                        files and pipes.
* Special FD::                          Special files for I/O.
* Special Files::                       File name interpretation in
                                        'gawk'. 'gawk' allows
                                        access to inherited file descriptors.
* Other Inherited Files::               Accessing other open files with
                                        'gawk'.
* Special Network::                     Special files for network
                                        communications.
* Special Caveats::                     Things to watch out for.
* Close Files And Pipes::               Closing input and output files and
                                        pipes.
* Nonfatal::                            Enabling nonfatal output.
* Output Summary::                      Output summary.
* Output Exercises::                    Exercises.
* Values::                              Constants, variables, and regular
                                        expressions.
* Constants::                           String, numeric and regexp constants.
* Scalar Constants::                    Numeric and string constants.
* Nondecimal-numbers::                  Octal and hexadecimal numbers.
* Regexp Constants::                    Regular expression constants.
* Using Constant Regexps::              When and how to use a regexp constant.
* Variables::                           Variables give names to values for
                                        later use.
* Using Variables::                     Using variables in your programs.
* Assignment Options::                  Setting variables on the command line
                                        and a summary of command-line syntax.
                                        This is an advanced method of input.
* Conversion::                          The conversion of strings to numbers
                                        and vice versa.
* Strings And Numbers::                 How 'awk' converts between
                                        strings and numbers.
* Locale influences conversions::       How the locale may affect conversions.
* All Operators::                       'gawk''s operators.
* Arithmetic Ops::                      Arithmetic operations ('+',
                                        '-', etc.)
* Concatenation::                       Concatenating strings.
* Assignment Ops::                      Changing the value of a variable or a
                                        field.
* Increment Ops::                       Incrementing the numeric value of a
                                        variable.
* Truth Values and Conditions::         Testing for true and false.
* Truth Values::                        What is "true" and what is
                                        "false".
* Typing and Comparison::               How variables acquire types and how
                                        this affects comparison of numbers and
                                        strings with '<', etc.
* Variable Typing::                     String type versus numeric type.
* Comparison Operators::                The comparison operators.
* POSIX String Comparison::             String comparison with POSIX rules.
* Boolean Ops::                         Combining comparison expressions using
                                        boolean operators '||' ("or"),
                                        '&&' ("and") and '!'
                                        ("not").
* Conditional Exp::                     Conditional expressions select between
                                        two subexpressions under control of a
                                        third subexpression.
* Function Calls::                      A function call is an expression.
* Precedence::                          How various operators nest.
* Locales::                             How the locale affects things.
* Expressions Summary::                 Expressions summary.
* Pattern Overview::                    What goes into a pattern.
* Regexp Patterns::                     Using regexps as patterns.
* Expression Patterns::                 Any expression can be used as a
                                        pattern.
* Ranges::                              Pairs of patterns specify record
                                        ranges.
* BEGIN/END::                           Specifying initialization and cleanup
                                        rules.
* Using BEGIN/END::                     How and why to use BEGIN/END rules.
* I/O And BEGIN/END::                   I/O issues in BEGIN/END rules.
* BEGINFILE/ENDFILE::                   Two special patterns for advanced
                                        control.
* Empty::                               The empty pattern, which matches every
                                        record.
* Using Shell Variables::               How to use shell variables with
                                        'awk'.
* Action Overview::                     What goes into an action.
* Statements::                          Describes the various control
                                        statements in detail.
* If Statement::                        Conditionally execute some
                                        'awk' statements.
* While Statement::                     Loop until some condition is
                                        satisfied.
* Do Statement::                        Do specified action while looping
                                        until some condition is satisfied.
* For Statement::                       Another looping statement, that
                                        provides initialization and increment
                                        clauses.
* Switch Statement::                    Switch/case evaluation for conditional
                                        execution of statements based on a
                                        value.
* Break Statement::                     Immediately exit the innermost
                                        enclosing loop.
* Continue Statement::                  Skip to the end of the innermost
                                        enclosing loop.
* Next Statement::                      Stop processing the current input
                                        record.
* Nextfile Statement::                  Stop processing the current file.
* Exit Statement::                      Stop execution of 'awk'.
* Built-in Variables::                  Summarizes the predefined variables.
* User-modified::                       Built-in variables that you change to
                                        control 'awk'.
* Auto-set::                            Built-in variables where 'awk'
                                        gives you information.
* ARGC and ARGV::                       Ways to use 'ARGC' and
                                        'ARGV'.
* Pattern Action Summary::              Patterns and Actions summary.
* Array Basics::                        The basics of arrays.
* Array Intro::                         Introduction to arrays.
* Reference to Elements::               How to examine one element of an
                                        array.
* Assigning Elements::                  How to change an element of an array.
* Array Example::                       A basic example of an array.
* Scanning an Array::                   A variation of the 'for'
                                        statement. It loops through the
                                        indices of an array's existing
                                        elements.
* Controlling Scanning::                Controlling the order in which arrays
                                        are scanned.
* Numeric Array Subscripts::            How to use numbers as subscripts in
                                        'awk'.
* Uninitialized Subscripts::            Using uninitialized variables as
                                        subscripts.
* Delete::                              The 'delete' statement removes an
                                        element from an array.
* Multidimensional::                    Emulating multidimensional arrays in
                                        'awk'.
* Multiscanning::                       Scanning multidimensional arrays.
* Arrays of Arrays::                    True multidimensional arrays.
* Arrays Summary::                      Summary of arrays.
* Built-in::                            Summarizes the built-in functions.
* Calling Built-in::                    How to call built-in functions.
* Numeric Functions::                   Functions that work with numbers,
                                        including 'int()', 'sin()'
                                        and 'rand()'.
* String Functions::                    Functions for string manipulation,
                                        such as 'split()', 'match()'
                                        and 'sprintf()'.
* Gory Details::                        More than you want to know about
                                        '\' and '&' with
                                        'sub()', 'gsub()', and
                                        'gensub()'.
* I/O Functions::                       Functions for files and shell
                                        commands.
* Time Functions::                      Functions for dealing with timestamps.
* Bitwise Functions::                   Functions for bitwise operations.
* Type Functions::                      Functions for type information.
* I18N Functions::                      Functions for string translation.
* User-defined::                        Describes user-defined functions in
                                        detail.
* Definition Syntax::                   How to write definitions and what they
                                        mean.
* Function Example::                    An example function definition and
                                        what it does.
* Function Caveats::                    Things to watch out for.
* Calling A Function::                  Don't use spaces when calling a
                                        function.
* Variable Scope::                      Controlling variable scope.
* Pass By Value/Reference::             Passing parameters.
* Return Statement::                    Specifying the value a function
                                        returns.
* Dynamic Typing::                      How variable types can change at
                                        runtime.
* Indirect Calls::                      Choosing the function to call at
                                        runtime.
* Functions Summary::                   Summary of functions.
* Library Names::                       How to best name private global
                                        variables in library functions.
* General Functions::                   Functions that are of general use.
* Strtonum Function::                   A replacement for the built-in
                                        'strtonum()' function.
* Assert Function::                     A function for assertions in
                                        'awk' programs.
* Round Function::                      A function for rounding if
                                        'sprintf()' does not do it
                                        correctly.
* Cliff Random Function::               The Cliff Random Number Generator.
* Ordinal Functions::                   Functions for using characters as
                                        numbers and vice versa.
* Join Function::                       A function to join an array into a
                                        string.
* Getlocaltime Function::               A function to get formatted times.
* Readfile Function::                   A function to read an entire file at
                                        once.
* Shell Quoting::                       A function to quote strings for the
                                        shell.
* Data File Management::                Functions for managing command-line
                                        data files.
* Filetrans Function::                  A function for handling data file
                                        transitions.
* Rewind Function::                     A function for rereading the current
                                        file.
* File Checking::                       Checking that data files are readable.
* Empty Files::                         Checking for zero-length files.
* Ignoring Assigns::                    Treating assignments as file names.
* Getopt Function::                     A function for processing command-line
                                        arguments.
* Passwd Functions::                    Functions for getting user
                                        information.
* Group Functions::                     Functions for getting group
                                        information.
* Walking Arrays::                      A function to walk arrays of arrays.
* Library Functions Summary::           Summary of library functions.
* Library Exercises::                   Exercises.
* Running Examples::                    How to run these examples.
* Clones::                              Clones of common utilities.
* Cut Program::                         The 'cut' utility.
* Egrep Program::                       The 'egrep' utility.
* Id Program::                          The 'id' utility.
* Split Program::                       The 'split' utility.
* Tee Program::                         The 'tee' utility.
* Uniq Program::                        The 'uniq' utility.
* Wc Program::                          The 'wc' utility.
* Miscellaneous Programs::              Some interesting 'awk'
                                        programs.
* Dupword Program::                     Finding duplicated words in a
                                        document.
* Alarm Program::                       An alarm clock.
* Translate Program::                   A program similar to the 'tr'
                                        utility.
* Labels Program::                      Printing mailing labels.
* Word Sorting::                        A program to produce a word usage
                                        count.
* History Sorting::                     Eliminating duplicate entries from a
                                        history file.
* Extract Program::                     Pulling out programs from Texinfo
                                        source files.
* Simple Sed::                          A simple stream editor.
* Igawk Program::                       A wrapper for 'awk' that
                                        includes files.
* Anagram Program::                     Finding anagrams from a dictionary.
* Signature Program::                   People do amazing things with too much
                                        time on their hands.
* Programs Summary::                    Summary of programs.
* Programs Exercises::                  Exercises.
* Nondecimal Data::                     Allowing nondecimal input data.
* Array Sorting::                       Facilities for controlling array
                                        traversal and sorting arrays.
* Controlling Array Traversal::         How to use 'PROCINFO["sorted_in"]'.
* Array Sorting Functions::             How to use 'asort()' and
                                        'asorti()'.
* Two-way I/O::                         Two-way communications with another
                                        process.
* TCP/IP Networking::                   Using 'gawk' for network
                                        programming.
* Profiling::                           Profiling your 'awk' programs.
* Advanced Features Summary::           Summary of advanced features.
* I18N and L10N::                       Internationalization and Localization.
* Explaining gettext::                  How GNU 'gettext' works.
* Programmer i18n::                     Features for the programmer.
* Translator i18n::                     Features for the translator.
* String Extraction::                   Extracting marked strings.
* Printf Ordering::                     Rearranging 'printf' arguments.
* I18N Portability::                    'awk'-level portability
                                        issues.
* I18N Example::                        A simple i18n example.
* Gawk I18N::                           'gawk' is also
                                        internationalized.
* I18N Summary::                        Summary of internationalization.
* Debugging::                           Introduction to the 'gawk'
                                        debugger.
* Debugging Concepts::                  Debugging in general.
* Debugging Terms::                     Additional debugging concepts.
* Awk Debugging::                       Debugging 'awk' programs.
* Sample Debugging Session::            Sample debugging session.
* Debugger Invocation::                 How to start the debugger.
* Finding The Bug::                     Finding the bug.
* List of Debugger Commands::           Main debugger commands.
* Breakpoint Control::                  Control of breakpoints.
* Debugger Execution Control::          Control of execution.
* Viewing And Changing Data::           Viewing and changing data.
* Execution Stack::                     Dealing with the stack.
* Debugger Info::                       Obtaining information about the
                                        program and the debugger state.
* Miscellaneous Debugger Commands::     Miscellaneous commands.
* Readline Support::                    Readline support.
* Limitations::                         Limitations and future plans.
* Debugging Summary::                   Debugging summary.
* Computer Arithmetic::                 A quick intro to computer math.
* Math Definitions::                    Defining terms used.
* MPFR features::                       The MPFR features in 'gawk'.
* FP Math Caution::                     Things to know.
* Inexactness of computations::         Floating point math is not exact.
* Inexact representation::              Numbers are not exactly represented.
* Comparing FP Values::                 How to compare floating point values.
* Errors accumulate::                   Errors get bigger as they go.
* Getting Accuracy::                    Getting more accuracy takes some work.
* Try To Round::                        Add digits and round.
* Setting precision::                   How to set the precision.
* Setting the rounding mode::           How to set the rounding mode.
* Arbitrary Precision Integers::        Arbitrary-precision integer
                                        arithmetic with 'gawk'.
* POSIX Floating Point Problems::       Standards versus existing practice.
* Floating point summary::              Summary of floating point discussion.
* Extension Intro::                     What an extension is.
* Plugin License::                      A note about licensing.
* Extension Mechanism Outline::         An outline of how it works.
* Extension API Description::           A full description of the API.
* Extension API Functions Introduction:: Introduction to the API functions.
* General Data Types::                  The data types.
* Memory Allocation Functions::         Functions for allocating memory.
* Constructor Functions::               Functions for creating values.
* Registration Functions::              Functions to register things with
                                        'gawk'.
* Extension Functions::                 Registering extension functions.
* Exit Callback Functions::             Registering an exit callback.
* Extension Version String::            Registering a version string.
* Input Parsers::                       Registering an input parser.
* Output Wrappers::                     Registering an output wrapper.
* Two-way processors::                  Registering a two-way processor.
* Printing Messages::                   Functions for printing messages.
* Updating ERRNO::                      Functions for updating 'ERRNO'.
* Requesting Values::                   How to get a value.
* Accessing Parameters::                Functions for accessing parameters.
* Symbol Table Access::                 Functions for accessing global
                                        variables.
* Symbol table by name::                Accessing variables by name.
* Symbol table by cookie::              Accessing variables by "cookie".
* Cached values::                       Creating and using cached values.
* Array Manipulation::                  Functions for working with arrays.
* Array Data Types::                    Data types for working with arrays.
* Array Functions::                     Functions for working with arrays.
* Flattening Arrays::                   How to flatten arrays.
* Creating Arrays::                     How to create and populate arrays.
* Redirection API::                     How to access and manipulate redirections.
* Extension API Variables::             Variables provided by the API.
* Extension Versioning::                API version information.
* Extension API Informational Variables:: Variables providing information about
                                        'gawk''s invocation.
* Extension API Boilerplate::           Boilerplate code for using the API.
* Finding Extensions::                  How 'gawk' finds compiled
                                        extensions.
* Extension Example::                   Example C code for an extension.
* Internal File Description::           What the new functions will do.
* Internal File Ops::                   The code for internal file operations.
* Using Internal File Ops::             How to use an external extension.
* Extension Samples::                   The sample extensions that ship with
                                        'gawk'.
* Extension Sample File Functions::     The file functions sample.
* Extension Sample Fnmatch::            An interface to 'fnmatch()'.
* Extension Sample Fork::               An interface to 'fork()' and
                                        other process functions.
* Extension Sample Inplace::            Enabling in-place file editing.
* Extension Sample Ord::                Character to value to character
                                        conversions.
* Extension Sample Readdir::            An interface to 'readdir()'.
* Extension Sample Revout::             Sample output wrapper that
                                        reverses output.
* Extension Sample Rev2way::            Sample two-way processor that
                                        reverses data.
* Extension Sample Read write array::   Serializing an array to a file.
* Extension Sample Readfile::           Reading an entire file into a string.
* Extension Sample Time::               An interface to 'gettimeofday()'
                                        and 'sleep()'.
* Extension Sample API Tests::          Tests for the API.
* gawkextlib::                          The 'gawkextlib' project.
* Extension summary::                   Extension summary.
* Extension Exercises::                 Exercises.
* V7/SVR3.1::                           The major changes between V7 and
                                        System V Release 3.1.
* SVR4::                                Minor changes between System V
                                        Releases 3.1 and 4.
* POSIX::                               New features from the POSIX standard.
* BTL::                                 New features from Brian Kernighan's
                                        version of 'awk'.
* POSIX/GNU::                           The extensions in 'gawk' not
                                        in POSIX 'awk'.
* Feature History::                     The history of the features in
                                        'gawk'.
* Common Extensions::                   Summary of common extensions.
* Ranges and Locales::                  How locales used to affect regexp
                                        ranges.
* Contributors::                        The major contributors to
                                        'gawk'.
* History summary::                     History summary.
* Gawk Distribution::                   What is in the 'gawk'
                                        distribution.
* Getting::                             How to get the distribution.
* Extracting::                          How to extract the distribution.
* Distribution contents::               What is in the distribution.
* Unix Installation::                   Installing 'gawk' under
                                        various versions of Unix.
* Quick Installation::                  Compiling 'gawk' under Unix.
* Shell Startup Files::                 Shell convenience functions.
* Additional Configuration Options::    Other compile-time options.
* Configuration Philosophy::            How it's all supposed to work.
* Non-Unix Installation::               Installation on other operating
                                        systems.
* PC Installation::                     Installing and compiling
                                        'gawk' on MS-DOS and OS/2.
* PC Binary Installation::              Installing a prepared distribution.
* PC Compiling::                        Compiling 'gawk' for MS-DOS,
                                        Windows32, and OS/2.
* PC Testing::                          Testing 'gawk' on PC systems.
* PC Using::                            Running 'gawk' on MS-DOS,
                                        Windows32 and OS/2.
* Cygwin::                              Building and running 'gawk'
                                        for Cygwin.
* MSYS::                                Using 'gawk' in the MSYS
                                        environment.
* VMS Installation::                    Installing 'gawk' on VMS.
* VMS Compilation::                     How to compile 'gawk' under
                                        VMS.
* VMS Dynamic Extensions::              Compiling 'gawk' dynamic
                                        extensions on VMS.
* VMS Installation Details::            How to install 'gawk' under
                                        VMS.
* VMS Running::                         How to run 'gawk' under VMS.
* VMS GNV::                             The VMS GNV Project.
* VMS Old Gawk::                        An old version comes with some VMS
                                        systems.
* Bugs::                                Reporting problems and bugs.
* Other Versions::                      Other freely available 'awk'
                                        implementations.
* Installation summary::                Summary of installation.
* Compatibility Mode::                  How to disable certain 'gawk'
                                        extensions.
* Additions::                           Making additions to 'gawk'.
* Accessing The Source::                Accessing the Git repository.
* Adding Code::                         Adding code to the main body of
                                        'gawk'.
* New Ports::                           Porting 'gawk' to a new
                                        operating system.
* Derived Files::                       Why derived files are kept in the Git
                                        repository.
* Future Extensions::                   New features that may be implemented
                                        one day.
* Implementation Limitations::          Some limitations of the
                                        implementation.
* Extension Design::                    Design notes about the extension API.
* Old Extension Problems::              Problems with the old mechanism.
* Extension New Mechanism Goals::       Goals for the new mechanism.
* Extension Other Design Decisions::    Some other design decisions.
* Extension Future Growth::             Some room for future growth.
* Old Extension Mechanism::             Some compatibility for old extensions.
* Notes summary::                       Summary of implementation notes.
* Basic High Level::                    The high-level view.
* Basic Data Typing::                   A very quick intro to data types.

   To my parents, for their love, and for the wonderful example they set
for me.

   To my wife Miriam, for making me complete.  Thank you for building
your life together with me.

   To our children Chana, Rivka, Nachum and Malka, for enriching our
lives in innumerable ways.


File: gawk.info,  Node: Foreword3,  Next: Foreword4,  Prev: Top,  Up: Top

Foreword to the Third Edition
*****************************

Arnold Robbins and I are good friends.  We were introduced in 1990 by
circumstances--and our favorite programming language, AWK. The
circumstances started a couple of years earlier.  I was working at a new
job and noticed an unplugged Unix computer sitting in the corner.  No
one knew how to use it, and neither did I. However, a couple of days
later, it was running, and I was 'root' and the one-and-only user.  That
day, I began the transition from statistician to Unix programmer.

   On one of many trips to the library or bookstore in search of books
on Unix, I found the gray AWK book, a.k.a. Alfred V. Aho, Brian W.
Kernighan, and Peter J. Weinberger's 'The AWK Programming Language'
(Addison-Wesley, 1988).  'awk''s simple programming paradigm--find a
pattern in the input and then perform an action--often reduced complex
or tedious data manipulations to a few lines of code.  I was excited to
try my hand at programming in AWK.
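   As a made-up illustration of that paradigm (not an example from the
gray book), the following one-liner pairs a pattern with an action:

```shell
# Pattern-action paradigm: for every input record, run the action of
# each rule whose pattern matches.  Here the pattern selects lines
# whose first field is "error"; the action totals their second field.
printf 'error 2\nok 5\nerror 3\n' |
awk '$1 == "error" { sum += $2 } END { print "total:", sum }'
```

The 'END' rule runs once, after all input has been read, so the total
is printed exactly one time.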

   Alas, the 'awk' on my computer was a limited version of the language
described in the gray book.  I discovered that my computer had "old
'awk'" and the book described "new 'awk'."  I learned that this was
typical; the old version refused to step aside or relinquish its name.
If a system had a new 'awk', it was invariably called 'nawk', and few
systems had it.  The best way to get a new 'awk' was to 'ftp' the source
code for 'gawk' from 'prep.ai.mit.edu'.  'gawk' was a version of new
'awk' written by David Trueman and Arnold, and available under the GNU
General Public License.

   (Incidentally, it's no longer difficult to find a new 'awk'.  'gawk'
ships with GNU/Linux, and you can download binaries or source code for
almost any system; my wife uses 'gawk' on her VMS box.)

   My Unix system started out unplugged from the wall; it certainly was
not plugged into a network.  So, oblivious to the existence of 'gawk'
and the Unix community in general, and desiring a new 'awk', I wrote my
own, called 'mawk'.  Before I was finished, I knew about 'gawk', but it
was too late to stop, so I eventually posted to a 'comp.sources'
newsgroup.

   A few days after my posting, I got a friendly email from Arnold
introducing himself.  He suggested we share design and algorithms and
attached a draft of the POSIX standard so that I could update 'mawk' to
support language extensions added after publication of 'The AWK
Programming Language'.

   Frankly, if our roles had been reversed, I would not have been so
open and we probably would have never met.  I'm glad we did meet.  He is
an AWK expert's AWK expert and a genuinely nice person.  Arnold
contributes significant amounts of his expertise and time to the Free
Software Foundation.

   This book is the 'gawk' reference manual, but at its core it is a
book about AWK programming that will appeal to a wide audience.  It is a
definitive reference to the AWK language as defined by the 1987 Bell
Laboratories release and codified in the 1992 POSIX Utilities standard.

   On the other hand, the novice AWK programmer can study a wealth of
practical programs that emphasize the power of AWK's basic idioms:
data-driven control flow, pattern matching with regular expressions, and
associative arrays.  Those looking for something new can try out
'gawk''s interface to network protocols via special '/inet' files.

   The programs in this book make clear that an AWK program is typically
much smaller and faster to develop than a counterpart written in C.
Consequently, there is often a payoff to prototyping an algorithm or
design in AWK to get it running quickly and expose problems early.
Often, the interpreted performance is adequate and the AWK prototype
becomes the product.

   The new 'pgawk' (profiling 'gawk') produces program execution
counts.  I recently experimented with an algorithm that, for n lines of
input, exhibited ~ C n^2 performance, while theory predicted ~ C n log n
behavior.  A few minutes poring over the 'awkprof.out' profile
pinpointed the problem to a single line of code.  'pgawk' is a welcome
addition to my programmer's toolbox.

   Arnold has distilled over a decade of experience writing and using
AWK programs, and developing 'gawk', into this book.  If you use AWK or
want to learn how, then read this book.

     Michael Brennan
     Author of 'mawk'
     March 2001


File: gawk.info,  Node: Foreword4,  Next: Preface,  Prev: Foreword3,  Up: Top

Foreword to the Fourth Edition
******************************

Some things don't change.  Thirteen years ago I wrote: "If you use AWK
or want to learn how, then read this book."  True then, and still true
today.

   Learning to use a programming language is about more than mastering
the syntax.  One needs to acquire an understanding of how to use the
features of the language to solve practical programming problems.  A
focus of this book is its many examples that show how to use AWK.

   Some things do change.  Our computers are much faster and have more
memory.  Consequently, speed and storage inefficiencies of a high-level
language matter less.  Prototyping in AWK and then rewriting in C for
performance reasons happens less, because more often the prototype is
fast enough.

   Of course, there are computing operations that are best done in C or
C++.  With 'gawk' 4.1 and later, you do not have to choose between
writing your program in AWK or in C/C++.  You can write most of your
program in AWK and the aspects that require C/C++ capabilities can be
written in C/C++, and then the pieces glued together when the 'gawk'
module loads the C/C++ module as a dynamic plug-in.  *note Dynamic
Extensions::, has all the details, and, as expected, many examples to
help you learn the ins and outs.

   I enjoy programming in AWK and had fun (re)reading this book.  I
think you will too.

     Michael Brennan
     Author of 'mawk'
     October 2014


File: gawk.info,  Node: Preface,  Next: Getting Started,  Prev: Foreword4,  Up: Top

Preface
*******

Several kinds of tasks occur repeatedly when working with text files.
You might want to extract certain lines and discard the rest.  Or you
may need to make changes wherever certain patterns appear, but leave the
rest of the file alone.  Such jobs are often easy with 'awk'.  The 'awk'
utility interprets a special-purpose programming language that makes it
easy to handle simple data-reformatting jobs.
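For example, both jobs just described fit in a one-line program apiece.
The following sketch uses a hypothetical file of names and phone numbers
(the file name 'mail-list' and its contents are invented for
illustration, not taken from the sample files used later in this Info
file):

```shell
# Build a small sample data file (hypothetical contents).
printf 'Amelia 555-5553\nBroderick 555-0542\nJulie 555-6699\n' > mail-list

# Extract the lines that match a pattern and discard the rest:
awk '/555-5/ { print }' mail-list
# prints: Amelia 555-5553

# Change text wherever a pattern appears, leaving other lines alone:
awk '{ sub(/555-/, "555."); print }' mail-list
```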

   The GNU implementation of 'awk' is called 'gawk'; if you invoke it
with the proper options or environment variables, it is fully compatible
with the POSIX(1) specification of the 'awk' language and with the Unix
version of 'awk' maintained by Brian Kernighan.  This means that all
properly written 'awk' programs should work with 'gawk'.  So most of the
time, we don't distinguish between 'gawk' and other 'awk'
implementations.

   Using 'awk' you can:

   * Manage small, personal databases

   * Generate reports

   * Validate data

   * Produce indexes and perform other document-preparation tasks

   * Experiment with algorithms that you can adapt later to other
     computer languages
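As a small illustration of the data-validation idea, a one-line sketch
like the following reports input records that do not have an expected
number of fields (the count of three, and the input itself, are just
assumptions about some hypothetical data):

```shell
# Print the record number and text of any line that does not
# have exactly three whitespace-separated fields.
printf 'a b c\nd e\nf g h\n' | awk 'NF != 3 { print NR ": " $0 }'
# prints: 2: d e
```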

   In addition, 'gawk' provides facilities that make it easy to:

   * Extract bits and pieces of data for processing

   * Sort data

   * Perform simple network communications

   * Profile and debug 'awk' programs

   * Extend the language with functions written in C or C++

   This Info file teaches you about the 'awk' language and how you can
use it effectively.  You should already be familiar with basic system
commands, such as 'cat' and 'ls',(2) as well as basic shell facilities,
such as input/output (I/O) redirection and pipes.

   Implementations of the 'awk' language are available for many
different computing environments.  This Info file, while describing the
'awk' language in general, also describes the particular implementation
of 'awk' called 'gawk' (which stands for "GNU 'awk'").  'gawk' runs on a
broad range of Unix systems, ranging from Intel-architecture PC-based
computers up through large-scale systems.  'gawk' has also been ported
to Mac OS X, Microsoft Windows (all versions) and OS/2 PCs, and
OpenVMS.(3)

* Menu:

* History::                     The history of 'gawk' and
                                'awk'.
* Names::                       What name to use to find 'awk'.
* This Manual::                 Using this Info file. Includes sample
                                input files that you can use.
* Conventions::                 Typographical Conventions.
* Manual History::              Brief history of the GNU project and this
                                Info file.
* How To Contribute::           Helping to save the world.
* Acknowledgments::             Acknowledgments.

   ---------- Footnotes ----------

   (1) The 2008 POSIX standard is accessible online at
<http://www.opengroup.org/onlinepubs/9699919799/>.

   (2) These utilities are available on POSIX-compliant systems, as well
as on traditional Unix-based systems.  If you are using some other
operating system, you still need to be familiar with the ideas of I/O
redirection and pipes.

   (3) Some other, obsolete systems to which 'gawk' was once ported are
no longer supported and the code for those systems has been removed.


File: gawk.info,  Node: History,  Next: Names,  Up: Preface

History of 'awk' and 'gawk'
===========================

                   Recipe for a Programming Language

          1 part 'egrep'   1 part 'snobol'
          2 parts 'ed'     3 parts C

   Blend all parts well using 'lex' and 'yacc'.  Document minimally and
release.

   After eight years, add another part 'egrep' and two more parts C.
Document very well and release.

   The name 'awk' comes from the initials of its designers: Alfred V.
Aho, Peter J. Weinberger, and Brian W. Kernighan.  The original version
of 'awk' was written in 1977 at AT&T Bell Laboratories.  In 1985, a new
version made the programming language more powerful, introducing
user-defined functions, multiple input streams, and computed regular
expressions.  This new version became widely available with Unix System
V Release 3.1 (1987).  The version in System V Release 4 (1989) added
some new features and cleaned up the behavior in some of the "dark
corners" of the language.  The specification for 'awk' in the POSIX
Command Language and Utilities standard further clarified the language.
Both the 'gawk' designers and the original 'awk' designers at Bell
Laboratories provided feedback for the POSIX specification.

   Paul Rubin wrote 'gawk' in 1986.  Jay Fenlason completed it, with
advice from Richard Stallman.  John Woods contributed parts of the code
as well.  In 1988 and 1989, David Trueman, with help from me, thoroughly
reworked 'gawk' for compatibility with the newer 'awk'.  Circa 1994, I
became the primary maintainer.  Current development focuses on bug
fixes, performance improvements, standards compliance, and,
occasionally, new features.

   In May 1997, Jürgen Kahrs felt the need for network access from
'awk', and with a little help from me, set about adding features to do
this for 'gawk'.  At that time, he also wrote the bulk of 'TCP/IP
Internetworking with 'gawk'' (a separate document, available as part of
the 'gawk' distribution).  His code finally became part of the main
'gawk' distribution with 'gawk' version 3.1.

   John Haque rewrote the 'gawk' internals, in the process providing an
'awk'-level debugger.  This version became available as 'gawk' version
4.0 in 2011.

   *Note Contributors:: for a full list of those who have made important
contributions to 'gawk'.


File: gawk.info,  Node: Names,  Next: This Manual,  Prev: History,  Up: Preface

A Rose by Any Other Name
========================

The 'awk' language has evolved over the years.  Full details are
provided in *note Language History::.  The language described in this
Info file is often referred to as "new 'awk'."  By analogy, the original
version of 'awk' is referred to as "old 'awk'."

   On most current systems, when you run the 'awk' utility you get some
version of new 'awk'.(1)  If your system's standard 'awk' is the old
one, you will see something like this if you try the test program:

     $ awk 1 /dev/null
     error-> awk: syntax error near line 1
     error-> awk: bailing out near line 1

In this case, you should find a version of new 'awk', or just install
'gawk'!

   Throughout this Info file, whenever we refer to a language feature
that should be available in any complete implementation of POSIX 'awk',
we simply use the term 'awk'.  When referring to a feature that is
specific to the GNU implementation, we use the term 'gawk'.

   ---------- Footnotes ----------

   (1) Only Solaris systems still use an old 'awk' for the default 'awk'
utility.  A more modern 'awk' lives in '/usr/xpg6/bin' on these systems.


File: gawk.info,  Node: This Manual,  Next: Conventions,  Prev: Names,  Up: Preface

Using This Book
===============

The term 'awk' refers to a particular program as well as to the language
you use to tell this program what to do.  When we need to be careful, we
call the language "the 'awk' language," and the program "the 'awk'
utility."  This Info file explains both how to write programs in the
'awk' language and how to run the 'awk' utility.  The term "'awk'
program" refers to a program written by you in the 'awk' programming
language.

   Primarily, this Info file explains the features of 'awk' as defined
in the POSIX standard.  It does so in the context of the 'gawk'
implementation.  While doing so, it also attempts to describe important
differences between 'gawk' and other 'awk' implementations.(1)  Finally,
it notes any 'gawk' features that are not in the POSIX standard for
'awk'.

   There are sidebars scattered throughout the Info file.  They add a
more complete explanation of points that are relevant, but not likely to
be of interest on first reading.  All appear in the index, under the
heading "sidebar."

   Most of the time, the examples use complete 'awk' programs.  Some of
the more advanced minor nodes show only the part of the 'awk' program
that illustrates the concept being described.

   Although this Info file is aimed principally at people who have not
been exposed to 'awk', there is a lot of information here that even the
'awk' expert should find useful.  In particular, the description of
POSIX 'awk' and the example programs in *note Library Functions::, and
in *note Sample Programs::, should be of interest.

   This Info file is split into several parts, as follows:

   * Part I describes the 'awk' language and the 'gawk' program in
     detail.  It starts with the basics, and continues through all of
     the features of 'awk'.  It contains the following chapters:

        - *note Getting Started::, provides the essentials you need to
          know to begin using 'awk'.

        - *note Invoking Gawk::, describes how to run 'gawk', the
          meaning of its command-line options, and how it finds 'awk'
          program source files.

        - *note Regexp::, introduces regular expressions in general, and
          in particular the flavors supported by POSIX 'awk' and 'gawk'.

        - *note Reading Files::, describes how 'awk' reads your data.
          It introduces the concepts of records and fields, as well as
          the 'getline' command.  I/O redirection is first described
          here.  Network I/O is also briefly introduced here.

        - *note Printing::, describes how 'awk' programs can produce
          output with 'print' and 'printf'.

        - *note Expressions::, describes expressions, which are the
          basic building blocks for getting most things done in a
          program.

        - *note Patterns and Actions::, describes how to write patterns
          for matching records, actions for doing something when a
          record is matched, and the predefined variables 'awk' and
          'gawk' use.

        - *note Arrays::, covers 'awk''s one-and-only data structure:
          the associative array.  Deleting array elements and whole
          arrays is described, as well as sorting arrays in 'gawk'.  The
          major node also describes how 'gawk' provides arrays of
          arrays.

        - *note Functions::, describes the built-in functions 'awk' and
          'gawk' provide, as well as how to define your own functions.
          It also discusses how 'gawk' lets you call functions
          indirectly.

   * Part II shows how to use 'awk' and 'gawk' for problem solving.
     There is lots of code here for you to read and learn from.  This
     part contains the following chapters:

        - *note Library Functions::, provides a number of functions
          meant to be used from main 'awk' programs.

        - *note Sample Programs::, provides many sample 'awk' programs.

     Reading these two chapters allows you to see 'awk' solving real
     problems.

   * Part III focuses on features specific to 'gawk'.  It contains the
     following chapters:

        - *note Advanced Features::, describes a number of advanced
          features.  Of particular note are the abilities to control the
          order of array traversal, have two-way communications with
          another process, perform TCP/IP networking, and profile your
          'awk' programs.

        - *note Internationalization::, describes special features for
          translating program messages into different languages at
          runtime.

        - *note Debugger::, describes the 'gawk' debugger.

        - *note Arbitrary Precision Arithmetic::, describes advanced
          arithmetic facilities.

        - *note Dynamic Extensions::, describes how to add new variables
          and functions to 'gawk' by writing extensions in C or C++.

   * Part IV provides the appendices, the Glossary, and two licenses
     that cover the 'gawk' source code and this Info file, respectively.
     It contains the following appendices:

        - *note Language History::, describes how the 'awk' language has
          evolved since its first release to the present.  It also
          describes how 'gawk' has acquired features over time.

        - *note Installation::, describes how to get 'gawk', how to
          compile it on POSIX-compatible systems, and how to compile and
          use it on different non-POSIX systems.  It also describes how
          to report bugs in 'gawk' and where to get other freely
          available 'awk' implementations.

        - *note Notes::, describes how to disable 'gawk''s extensions,
          as well as how to contribute new code to 'gawk', and some
          possible future directions for 'gawk' development.

        - *note Basic Concepts::, provides some very cursory background
          material for those who are completely unfamiliar with computer
          programming.

          The *note Glossary::, defines most, if not all, of the
          significant terms used throughout the Info file.  If you find
          terms that you aren't familiar with, try looking them up here.

        - *note Copying::, and *note GNU Free Documentation License::,
          present the licenses that cover the 'gawk' source code and
          this Info file, respectively.

   ---------- Footnotes ----------

   (1) All such differences appear in the index under the entry
"differences in 'awk' and 'gawk'."


File: gawk.info,  Node: Conventions,  Next: Manual History,  Prev: This Manual,  Up: Preface

Typographical Conventions
=========================

This Info file is written in Texinfo
(http://www.gnu.org/software/texinfo/), the GNU documentation formatting
language.  A single Texinfo source file is used to produce both the
printed and online versions of the documentation.  This minor node
briefly documents the typographical conventions used in Texinfo.

   Examples you would type at the command line are preceded by the
common shell primary and secondary prompts, '$' and '>'.  Input that you
type is shown 'like this'.  Output from the command is preceded by the
glyph "-|".  This typically represents the command's standard output.
Error messages and other output on the command's standard error are
preceded by the glyph "error->".  For example:

     $ echo hi on stdout
     -| hi on stdout
     $ echo hello on stderr 1>&2
     error-> hello on stderr

   Characters that you type at the keyboard look 'like this'.  In
particular, there are special characters called "control characters."
These are characters that you type by holding down both the 'CONTROL'
key and another key at the same time.  For example, a 'Ctrl-d' is typed
by first pressing and holding the 'CONTROL' key, next pressing the 'd'
key, and finally releasing both keys.

   For the sake of brevity, throughout this Info file, we refer to Brian
Kernighan's version of 'awk' as "BWK 'awk'."  (*Note Other Versions::
for information on his and other versions.)

Dark Corners
------------

     Dark corners are basically fractal--no matter how much you
     illuminate, there's always a smaller but darker one.
                         -- _Brian Kernighan_

   Until the POSIX standard (and 'GAWK: Effective AWK Programming'),
many features of 'awk' were either poorly documented or not documented
at all.  Descriptions of such features (often called "dark corners") are
noted in this Info file with "(d.c.)."  They also appear in the index
under the heading "dark corner."

   But, as noted by the opening quote, any coverage of dark corners is
by definition incomplete.

   Extensions to the standard 'awk' language that are supported by more
than one 'awk' implementation are marked "(c.e.)," and listed in the
index under "common extensions" and "extensions, common."


File: gawk.info,  Node: Manual History,  Next: How To Contribute,  Prev: Conventions,  Up: Preface

The GNU Project and This Book
=============================

The Free Software Foundation (FSF) is a nonprofit organization dedicated
to the production and distribution of freely distributable software.  It
was founded by Richard M. Stallman, the author of the original Emacs
editor.  GNU Emacs is the most widely used version of Emacs today.

   The GNU(1) Project is an ongoing effort on the part of the Free
Software Foundation to create a complete, freely distributable,
POSIX-compliant computing environment.  The FSF uses the GNU General
Public License (GPL) to ensure that its software's source code is always
available to the end user.  A copy of the GPL is included for your
reference (*note Copying::).  The GPL applies to the C language source
code for 'gawk'.  To find out more about the FSF and the GNU Project
online, see the GNU Project's home page (http://www.gnu.org).  This Info
file may also be read from GNU's website
(http://www.gnu.org/software/gawk/manual/).

   A shell, an editor (Emacs), highly portable optimizing C, C++, and
Objective-C compilers, a symbolic debugger and dozens of large and small
utilities (such as 'gawk'), have all been completed and are freely
available.  The GNU operating system kernel (the HURD), has been
released but remains in an early stage of development.

   Until the GNU operating system is more fully developed, you should
consider using GNU/Linux, a freely distributable, Unix-like operating
system for Intel, Power Architecture, Sun SPARC, IBM S/390, and other
systems.(2)  Many GNU/Linux distributions are available for download
from the Internet.

   The Info file itself has gone through multiple previous editions.
Paul Rubin wrote the very first draft of 'The GAWK Manual'; it was
around 40 pages long.  Diane Close and Richard Stallman improved it,
yielding a version that was around 90 pages and barely described the
original, "old" version of 'awk'.

   I started working with that version in the fall of 1988.  As work on
it progressed, the FSF published several preliminary versions (numbered
0.X).  In 1996, edition 1.0 was released with 'gawk' 3.0.0.  The FSF
published the first two editions under the title 'The GNU Awk User's
Guide'.

   This edition maintains the basic structure of the previous editions.
For FSF edition 4.0, the content was thoroughly reviewed and updated.
All references to 'gawk' versions prior to 4.0 were removed.  Of
significant note for that edition was the addition of *note Debugger::.

   For FSF edition 4.1, the content has been reorganized into parts, and
the major new additions are *note Arbitrary Precision Arithmetic::, and
*note Dynamic Extensions::.

   This Info file will undoubtedly continue to evolve.  If you find an
error in the Info file, please report it!  *Note Bugs:: for information
on submitting problem reports electronically.

   ---------- Footnotes ----------

   (1) GNU stands for "GNU's Not Unix."

   (2) The terminology "GNU/Linux" is explained in the *note Glossary::.


File: gawk.info,  Node: How To Contribute,  Next: Acknowledgments,  Prev: Manual History,  Up: Preface

How to Contribute
=================

As the maintainer of GNU 'awk', I once thought that I would be able to
manage a collection of publicly available 'awk' programs and I even
solicited contributions.  Making things available on the Internet helps
keep the 'gawk' distribution down to manageable size.

   The initial collection of material, such as it is, is still available
at <ftp://ftp.freefriends.org/arnold/Awkstuff>.  In the hopes of doing
something broader, I acquired the 'awk.info' domain.

   However, I found that I could not dedicate enough time to managing
contributed code: the archive did not grow and the domain went unused
for several years.

   Late in 2008, a volunteer took on the task of setting up an
'awk'-related website--<http://awk.info>--and did a very nice job.

   If you have written an interesting 'awk' program, or have written a
'gawk' extension that you would like to share with the rest of the
world, please see <http://awk.info/?contribute> for how to contribute it
to the website.


File: gawk.info,  Node: Acknowledgments,  Prev: How To Contribute,  Up: Preface

Acknowledgments
===============

The initial draft of 'The GAWK Manual' had the following
acknowledgments:

     Many people need to be thanked for their assistance in producing
     this manual.  Jay Fenlason contributed many ideas and sample
     programs.  Richard Mlynarik and Robert Chassell gave helpful
     comments on drafts of this manual.  The paper 'A Supplemental
     Document for AWK' by John W. Pierce of the Chemistry Department at
     UC San Diego, pinpointed several issues relevant both to 'awk'
     implementation and to this manual, that would otherwise have
     escaped us.

   I would like to acknowledge Richard M. Stallman, for his vision of a
better world and for his courage in founding the FSF and starting the
GNU Project.

   Earlier editions of this Info file had the following
acknowledgments:

     The following people (in alphabetical order) provided helpful
     comments on various versions of this book: Rick Adams, Dr. Nelson
     H.F. Beebe, Karl Berry, Dr. Michael Brennan, Rich Burridge, Claire
     Cloutier, Diane Close, Scott Deifik, Christopher ("Topher") Eliot,
     Jeffrey Friedl, Dr. Darrel Hankerson, Michal Jaegermann, Dr.
     Richard J. LeBlanc, Michael Lijewski, Pat Rankin, Miriam Robbins,
     Mary Sheehan, and Chuck Toporek.

     Robert J. Chassell provided much valuable advice on the use of
     Texinfo.  He also deserves special thanks for convincing me _not_
     to title this Info file 'How to Gawk Politely'.  Karl Berry helped
     significantly with the TeX part of Texinfo.

     I would like to thank Marshall and Elaine Hartholz of Seattle and
     Dr. Bert and Rita Schreiber of Detroit for large amounts of quiet
     vacation time in their homes, which allowed me to make significant
     progress on this Info file and on 'gawk' itself.

     Phil Hughes of SSC contributed in a very important way by loaning
     me his laptop GNU/Linux system, not once, but twice, which allowed
     me to do a lot of work while away from home.

     David Trueman deserves special credit; he has done a yeoman job of
     evolving 'gawk' so that it performs well and without bugs.
     Although he is no longer involved with 'gawk', working with him on
     this project was a significant pleasure.

     The intrepid members of the GNITS mailing list, and most notably
     Ulrich Drepper, provided invaluable help and feedback for the
     design of the internationalization features.

     Chuck Toporek, Mary Sheehan, and Claire Cloutier of O'Reilly &
     Associates contributed significant editorial help for this Info
     file for the 3.1 release of 'gawk'.

   Dr. Nelson Beebe, Andreas Buening, Dr. Manuel Collado, Antonio
Colombo, Stephen Davies, Scott Deifik, Akim Demaille, Daniel Richard G.,
Darrel Hankerson, Michal Jaegermann, Jürgen Kahrs, Stepan Kasal, John
Malmberg, Dave Pitts, Chet Ramey, Pat Rankin, Andrew Schorr, Corinna
Vinschen, and Eli Zaretskii (in alphabetical order) make up the current
'gawk' "crack portability team."  Without their hard work and help,
'gawk' would not be nearly the robust, portable program it is today.  It
has been and continues to be a pleasure working with this team of fine
people.

   Notable code and documentation contributions were made by a number of
people.  *Note Contributors:: for the full list.

   Thanks to Michael Brennan for the Forewords.

   Thanks to Patrice Dumas for the new 'makeinfo' program.  Thanks to
Karl Berry, who continues to work to keep the Texinfo markup language
sane.

   Robert P.J. Day, Michael Brennan, and Brian Kernighan kindly acted as
reviewers for the 2015 edition of this Info file.  Their feedback helped
improve the final work.

   I would also like to thank Brian Kernighan for his invaluable
assistance during the testing and debugging of 'gawk', and for his
ongoing help and advice in clarifying numerous points about the
language.  We could not have done nearly as good a job on either 'gawk'
or its documentation without his help.

   Brian is in a class by himself as a programmer and technical author.
I have to thank him (yet again) for his ongoing friendship and for being
a role model to me for close to 30 years!  Having him as a reviewer is
an exciting privilege.  It has also been extremely humbling...

   I must thank my wonderful wife, Miriam, for her patience through the
many versions of this project, for her proofreading, and for sharing me
with the computer.  I would like to thank my parents for their love, and
for the grace with which they raised and educated me.  Finally, I also
must acknowledge my gratitude to G-d, for the many opportunities He has
sent my way, as well as for the gifts He has given me with which to take
advantage of those opportunities.


Arnold Robbins
Nof Ayalon
Israel
February 2015


File: gawk.info,  Node: Getting Started,  Next: Invoking Gawk,  Prev: Preface,  Up: Top

1 Getting Started with 'awk'
****************************

The basic function of 'awk' is to search files for lines (or other units
of text) that contain certain patterns.  When a line matches one of the
patterns, 'awk' performs specified actions on that line.  'awk'
continues to process input lines in this way until it reaches the end of
the input files.

   Programs in 'awk' are different from programs in most other
languages, because 'awk' programs are "data driven" (i.e., you describe
the data you want to work with and then what to do when you find it).
Most other languages are "procedural"; you have to describe, in great
detail, every step the program should take.  When working with
procedural languages, it is usually much harder to clearly describe the
data your program will process.  For this reason, 'awk' programs are
often refreshingly easy to read and write.

   When you run 'awk', you specify an 'awk' "program" that tells 'awk'
what to do.  The program consists of a series of "rules" (it may also
contain "function definitions", an advanced feature that we will ignore
for now; *note User-defined::).  Each rule specifies one pattern to
search for and one action to perform upon finding the pattern.

   Syntactically, a rule consists of a "pattern" followed by an
"action".  The action is enclosed in braces to separate it from the
pattern.  Newlines usually separate rules.  Therefore, an 'awk' program
looks like this:

     PATTERN { ACTION }
     PATTERN { ACTION }
     ...
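For instance, a concrete (and entirely hypothetical) program in this
form might consist of two rules, each pairing a regular-expression
pattern with a 'print' action.  A line that matches both patterns
triggers both actions, in order:

```shell
# Two rules; each input line is tested against both patterns.
printf 'one\ntwo\nthree\n' | awk '/t/ { print "has t:", $0 }
/e/ { print "has e:", $0 }'
# prints:
#   has e: one
#   has t: two
#   has t: three
#   has e: three
```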

* Menu:

* Running gawk::                How to run 'gawk' programs; includes
                                command-line syntax.
* Sample Data Files::           Sample data files for use in the 'awk'
                                programs illustrated in this Info file.
* Very Simple::                 A very simple example.
* Two Rules::                   A less simple one-line example using two
                                rules.
* More Complex::                A more complex example.
* Statements/Lines::            Subdividing or combining statements into
                                lines.
* Other Features::              Other Features of 'awk'.
* When::                        When to use 'gawk' and when to use
                                other things.
* Intro Summary::               Summary of the introduction.


File: gawk.info,  Node: Running gawk,  Next: Sample Data Files,  Up: Getting Started

1.1 How to Run 'awk' Programs
=============================

There are several ways to run an 'awk' program.  If the program is
short, it is easiest to include it in the command that runs 'awk', like
this:

     awk 'PROGRAM' INPUT-FILE1 INPUT-FILE2 ...

   When the program is long, it is usually more convenient to put it in
a file and run it with a command like this:

     awk -f PROGRAM-FILE INPUT-FILE1 INPUT-FILE2 ...

   This minor node discusses both mechanisms, along with several
variations of each.

* Menu:

* One-shot::                    Running a short throwaway 'awk'
                                program.
* Read Terminal::               Using no input files (input from the keyboard
                                instead).
* Long::                        Putting permanent 'awk' programs in
                                files.
* Executable Scripts::          Making self-contained 'awk' programs.
* Comments::                    Adding documentation to 'gawk'
                                programs.
* Quoting::                     More discussion of shell quoting issues.


File: gawk.info,  Node: One-shot,  Next: Read Terminal,  Up: Running gawk

1.1.1 One-Shot Throwaway 'awk' Programs
---------------------------------------

Once you are familiar with 'awk', you will often type in simple programs
the moment you want to use them.  Then you can write the program as the
first argument of the 'awk' command, like this:

     awk 'PROGRAM' INPUT-FILE1 INPUT-FILE2 ...

where PROGRAM consists of a series of patterns and actions, as described
earlier.

   This command format instructs the "shell", or command interpreter, to
start 'awk' and use the PROGRAM to process records in the input file(s).
There are single quotes around PROGRAM so the shell won't interpret any
'awk' characters as special shell characters.  The quotes also cause the
shell to treat all of PROGRAM as a single argument for 'awk', and allow
PROGRAM to be more than one line long.
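
   For example, the following sketch (hypothetical, but any POSIX
'awk' should run it) passes a three-line program to 'awk' as a single
argument; 'printf' stands in for real input files:

```shell
# The single quotes hold this multi-line program together as one
# shell argument; 'printf' supplies some throwaway input.
printf 'a\nb\nc\n' | awk '
    { count++ }
    END { print count, "lines" }
'
# prints "3 lines"
```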

   This format is also useful for running short or medium-sized 'awk'
programs from shell scripts, because it avoids the need for a separate
file for the 'awk' program.  A self-contained shell script is more
reliable because there are no other files to misplace.

   Later in this chapter, in *note Very Simple::, we'll see examples of
several short, self-contained programs.


File: gawk.info,  Node: Read Terminal,  Next: Long,  Prev: One-shot,  Up: Running gawk

1.1.2 Running 'awk' Without Input Files
---------------------------------------

You can also run 'awk' without any input files.  If you type the
following command line:

     awk 'PROGRAM'

'awk' applies the PROGRAM to the "standard input", which usually means
whatever you type on the keyboard.  This continues until you indicate
end-of-file by typing 'Ctrl-d'.  (On non-POSIX operating systems, the
end-of-file character may be different.  For example, on OS/2, it is
'Ctrl-z'.)
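
   The standard input need not come from the keyboard at all; a pipe
or a redirection works identically.  As a small sketch:

```shell
# Feed awk's standard input from a pipe instead of the keyboard.
# NR is the number of the current record (line).
printf 'one\ntwo\n' | awk '{ print NR ": " $0 }'
# prints "1: one" and "2: two"
```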

   As an example, the following program prints a friendly piece of
advice (from Douglas Adams's 'The Hitchhiker's Guide to the Galaxy'), to
keep you from worrying about the complexities of computer programming:

     $ awk 'BEGIN { print "Don\47t Panic!" }'
     -| Don't Panic!

   'awk' executes statements associated with 'BEGIN' before reading any
input.  If there are no other statements in your program, as is the case
here, 'awk' just stops, instead of trying to read input it doesn't know
how to process.  The '\47' is a magic way (explained later) of getting a
single quote into the program, without having to engage in ugly shell
quoting tricks.

     NOTE: If you use Bash as your shell, you should execute the command
     'set +H' before running this program interactively, to disable the
     C shell-style command history, which treats '!' as a special
     character.  We recommend putting this command into your personal
     startup file.

   This next simple 'awk' program emulates the 'cat' utility; it copies
whatever you type on the keyboard to its standard output (why this works
is explained shortly):

     $ awk '{ print }'
     Now is the time for all good men
     -| Now is the time for all good men
     to come to the aid of their country.
     -| to come to the aid of their country.
     Four score and seven years ago, ...
     -| Four score and seven years ago, ...
     What, me worry?
     -| What, me worry?
     Ctrl-d


File: gawk.info,  Node: Long,  Next: Executable Scripts,  Prev: Read Terminal,  Up: Running gawk

1.1.3 Running Long Programs
---------------------------

Sometimes 'awk' programs are very long.  In these cases, it is more
convenient to put the program into a separate file.  In order to tell
'awk' to use that file for its program, you type:

     awk -f SOURCE-FILE INPUT-FILE1 INPUT-FILE2 ...

   The '-f' instructs the 'awk' utility to get the 'awk' program from
the file SOURCE-FILE (*note Options::).  Any file name can be used for
SOURCE-FILE.  For example, you could put the program:

     BEGIN { print "Don't Panic!" }

into the file 'advice'.  Then this command:

     awk -f advice

does the same thing as this one:

     awk 'BEGIN { print "Don\47t Panic!" }'

This was explained earlier (*note Read Terminal::).  Note that you don't
usually need single quotes around the file name that you specify with
'-f', because most file names don't contain any of the shell's special
characters.  Notice that in 'advice', the 'awk' program did not have
single quotes around it.  The quotes are only needed for programs that
are provided on the 'awk' command line.  (Also, placing the program in a
file allows us to use a literal single quote in the program text,
instead of the magic '\47'.)

   If you want to clearly identify an 'awk' program file as such, you
can add the extension '.awk' to the file name.  This doesn't affect the
execution of the 'awk' program but it does make "housekeeping" easier.


File: gawk.info,  Node: Executable Scripts,  Next: Comments,  Prev: Long,  Up: Running gawk

1.1.4 Executable 'awk' Programs
-------------------------------

Once you have learned 'awk', you may want to write self-contained 'awk'
scripts, using the '#!' script mechanism.  You can do this on many
systems.(1)  For example, you could update the file 'advice' to look
like this:

     #! /bin/awk -f

     BEGIN { print "Don't Panic!" }

After making this file executable (with the 'chmod' utility), simply
type 'advice' at the shell and the system arranges to run 'awk' as if
you had typed 'awk -f advice':

     $ chmod +x advice
     $ advice
     -| Don't Panic!

(We assume you have the current directory in your shell's search path
variable [typically '$PATH'].  If not, you may need to type './advice'
at the shell.)

   Self-contained 'awk' scripts are useful when you want to write a
program that users can invoke without their having to know that the
program is written in 'awk'.

                          Understanding '#!'

   'awk' is an "interpreted" language.  This means that the 'awk'
utility reads your program and then processes your data according to the
instructions in your program.  (This is different from a "compiled"
language such as C, where your program is first compiled into machine
code that is executed directly by your system's processor.)  The 'awk'
utility is thus termed an "interpreter".  Many modern languages are
interpreted.

   The line beginning with '#!' lists the full file name of an
interpreter to run and a single optional initial command-line argument
to pass to that interpreter.  The operating system then runs the
interpreter with the given argument and the full argument list of the
executed program.  The first argument in the list is the full file name
of the 'awk' program.  The rest of the argument list contains either
options to 'awk', or data files, or both.  (Note that on many systems
'awk' may be found in '/usr/bin' instead of in '/bin'.)
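
   As an illustration only, the effect of the '#!' line can be
reproduced by hand.  This sketch assumes a POSIX shell where
'command -v' reports the location of your 'awk':

```shell
# Build 'advice' using whatever path 'command -v' reports for awk
# (paths vary between systems), then mark it executable.
printf '#! %s -f\nBEGIN { print "Don'\''t Panic!" }\n' \
    "$(command -v awk)" > advice
chmod +x advice
# Because '#' starts an awk comment, the '#!' line is harmless to
# awk itself, so this also works:
awk -f advice
# prints "Don't Panic!"
```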

   Some systems limit the length of the interpreter name to 32
characters.  Often, this can be dealt with by using a symbolic link.

   You should not put more than one argument on the '#!' line after the
path to 'awk'.  It does not work.  The operating system treats the rest
of the line as a single argument and passes it to 'awk'.  Doing this
leads to confusing behavior--most likely a usage diagnostic of some sort
from 'awk'.

   Finally, the value of 'ARGV[0]' (*note Built-in Variables::) varies
depending upon your operating system.  Some systems put 'awk' there,
some put the full pathname of 'awk' (such as '/bin/awk'), and some put
the name of your script ('advice').  (d.c.)  Don't rely on the value of
'ARGV[0]' to provide your script name.
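
   You can, of course, inspect the value on your own system; just
don't depend on what you find:

```shell
# The output here is system-dependent: "awk", a full path such as
# "/usr/bin/awk", or the name of a script.
awk 'BEGIN { print ARGV[0] }'
```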

   ---------- Footnotes ----------

   (1) The '#!' mechanism works on GNU/Linux systems, BSD-based systems,
and commercial Unix systems.


File: gawk.info,  Node: Comments,  Next: Quoting,  Prev: Executable Scripts,  Up: Running gawk

1.1.5 Comments in 'awk' Programs
--------------------------------

A "comment" is some text that is included in a program for the sake of
human readers; it is not really an executable part of the program.
Comments can explain what the program does and how it works.  Nearly all
programming languages have provisions for comments, as programs are
typically hard to understand without them.

   In the 'awk' language, a comment starts with the number sign
character ('#') and continues to the end of the line.  The '#' does not
have to be the first character on the line.  The 'awk' language ignores
the rest of a line following a number sign.  For example, we could have
put the following into 'advice':

     # This program prints a nice, friendly message.  It helps
     # keep novice users from being afraid of the computer.
     BEGIN    { print "Don't Panic!" }

   You can put comment lines into keyboard-composed throwaway 'awk'
programs, but this usually isn't very useful; the purpose of a comment
is to help you or another person understand the program when reading it
at a later time.

     CAUTION: As mentioned in *note One-shot::, you can enclose short to
     medium-sized programs in single quotes, in order to keep your shell
     scripts self-contained.  When doing so, _don't_ put an apostrophe
     (i.e., a single quote) into a comment (or anywhere else in your
     program).  The shell interprets the quote as the closing quote for
     the entire program.  As a result, usually the shell prints a
     message about mismatched quotes, and if 'awk' actually runs, it
     will probably print strange messages about syntax errors.  For
     example, look at the following:

          $ awk 'BEGIN { print "hello" } # let's be cute'
          >

     The shell sees that the first two quotes match, and that a new
     quoted object begins at the end of the command line.  It therefore
     prompts with the secondary prompt, waiting for more input.  With
     Unix 'awk', closing the quoted string produces this result:

          $ awk '{ print "hello" } # let's be cute'
          > '
          error-> awk: can't open file be
          error->  source line number 1

     Putting a backslash before the single quote in 'let's' wouldn't
     help, because backslashes are not special inside single quotes.
     The next node describes the shell's quoting rules.
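
   If a comment really wants an apostrophe, the simplest cure is to
reword it.  This sketch is a safe variant of the example above:

```shell
# Rewording the comment removes the stray single quote, so the
# shell's quoting is no longer disturbed.
awk 'BEGIN { print "hello" } # let us be cute'
# prints "hello"
```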


File: gawk.info,  Node: Quoting,  Prev: Comments,  Up: Running gawk

1.1.6 Shell Quoting Issues
--------------------------

* Menu:

* DOS Quoting::                 Quoting in Windows Batch Files.

For short to medium-length 'awk' programs, it is most convenient to
enter the program on the 'awk' command line.  This is best done by
enclosing the entire program in single quotes.  This is true whether you
are entering the program interactively at the shell prompt, or writing
it as part of a larger shell script:

     awk 'PROGRAM TEXT' INPUT-FILE1 INPUT-FILE2 ...

   Once you are working with the shell, it is helpful to have a basic
knowledge of shell quoting rules.  The following rules apply only to
POSIX-compliant, Bourne-style shells (such as Bash, the GNU Bourne-Again
Shell).  If you use the C shell, you're on your own.

   Before diving into the rules, we introduce a concept that appears
throughout this Info file, which is that of the "null", or empty,
string.

   The null string is character data that has no value.  In other words,
it is empty.  It is written in 'awk' programs like this: '""'.  In the
shell, it can be written using single or double quotes: '""' or ''''.
Although the null string has no characters in it, it does exist.  For
example, consider this command:

     $ echo ""

Here, the 'echo' utility receives a single argument, even though that
argument has no characters in it.  In the rest of this Info file, we use
the terms "null string" and "empty string" interchangeably.  Now, on to
the quoting rules:

   * Quoted items can be concatenated with nonquoted items as well as
     with other quoted items.  The shell turns everything into one
     argument for the command.

   * Preceding any single character with a backslash ('\') quotes that
     character.  The shell removes the backslash and passes the quoted
     character on to the command.

   * Single quotes protect everything between the opening and closing
     quotes.  The shell does no interpretation of the quoted text,
     passing it on verbatim to the command.  It is _impossible_ to embed
     a single quote inside single-quoted text.  Refer back to *note
     Comments:: for an example of what happens if you try.

   * Double quotes protect most things between the opening and closing
     quotes.  The shell does at least variable and command substitution
     on the quoted text.  Different shells may do additional kinds of
     processing on double-quoted text.

     Because certain characters within double-quoted text are processed
     by the shell, they must be "escaped" within the text.  Of note are
     the characters '$', '`', '\', and '"', all of which must be
     preceded by a backslash within double-quoted text if they are to be
     passed on literally to the program.  (The leading backslash is
     stripped first.)  Thus, the example seen in *note Read Terminal:::

          awk 'BEGIN { print "Don\47t Panic!" }'

     could instead be written this way:

          $ awk "BEGIN { print \"Don't Panic!\" }"
          -| Don't Panic!

     Note that the single quote is not special within double quotes.

   * Null strings are removed when they occur as part of a non-null
     command-line argument, while explicit null objects are kept.  For
     example, to specify that the field separator 'FS' should be set to
     the null string, use:

          awk -F "" 'PROGRAM' FILES # correct

     Don't use this:

          awk -F"" 'PROGRAM' FILES  # wrong!

     In the second case, 'awk' attempts to use the text of the program
     as the value of 'FS', and the first file name as the text of the
     program!  This results in syntax errors at best, and confusing
     behavior at worst.

   Mixing single and double quotes is difficult.  You have to resort to
shell quoting tricks, like this:

     $ awk 'BEGIN { print "Here is a single quote <'"'"'>" }'
     -| Here is a single quote <'>

This program consists of three concatenated quoted strings.  The first
and the third are single-quoted, and the second is double-quoted.

   This can be "simplified" to:

     $ awk 'BEGIN { print "Here is a single quote <'\''>" }'
     -| Here is a single quote <'>

Judge for yourself which of these two is the more readable.

   Another option is to use double quotes, escaping the embedded,
'awk'-level double quotes:

     $ awk "BEGIN { print \"Here is a single quote <'>\" }"
     -| Here is a single quote <'>

This option is also painful, because double quotes, backslashes, and
dollar signs are very common in more advanced 'awk' programs.

   A third option is to use the octal escape sequence equivalents (*note
Escape Sequences::) for the single- and double-quote characters, like
so:

     $ awk 'BEGIN { print "Here is a single quote <\47>" }'
     -| Here is a single quote <'>
     $ awk 'BEGIN { print "Here is a double quote <\42>" }'
     -| Here is a double quote <">

This works nicely, but you should comment clearly what the escapes mean.

   A fourth option is to use command-line variable assignment, like
this:

     $ awk -v sq="'" 'BEGIN { print "Here is a single quote <" sq ">" }'
     -| Here is a single quote <'>

   (Here, the two string constants and the value of 'sq' are
concatenated into a single string that is printed by 'print'.)

   If you really need both single and double quotes in your 'awk'
program, it is probably best to move it into a separate file, where the
shell won't be part of the picture and you can say what you mean.
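
   For instance (a sketch; the file name 'quoted.awk' is our
invention):

```shell
# In a program file the shell is out of the picture: a literal
# single quote is fine, and the double quote needs only awk-level
# escaping with a backslash.
cat > quoted.awk <<'EOF'
BEGIN { print "Single <'> and double <\"> quotes together" }
EOF
awk -f quoted.awk
# prints "Single <'> and double <"> quotes together"
```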


File: gawk.info,  Node: DOS Quoting,  Up: Quoting

1.1.6.1 Quoting in MS-Windows Batch Files
.........................................

Although this Info file generally only worries about POSIX systems and
the POSIX shell, the following issue arises often enough for many users
that it is worth addressing.

   The "shells" on Microsoft Windows systems use the double-quote
character for quoting, and make it difficult or impossible to include an
escaped double-quote character in a command-line script.  The following
example, courtesy of Jeroen Brink, shows how to print all lines in a
file surrounded by double quotes:

     gawk "{ print \"\042\" $0 \"\042\" }" FILE


File: gawk.info,  Node: Sample Data Files,  Next: Very Simple,  Prev: Running gawk,  Up: Getting Started

1.2 Data Files for the Examples
===============================

Many of the examples in this Info file take their input from two sample
data files.  The first, 'mail-list', represents a list of people's names
together with their email addresses and information about those people.
The second data file, called 'inventory-shipped', contains information
about monthly shipments.  In both files, each line is considered to be
one "record".

   In 'mail-list', each record contains the name of a person, his/her
phone number, his/her email address, and a code for his/her relationship
with the author of the list.  The columns are aligned using spaces.  An
'A' in the last column means that the person is an acquaintance.  An 'F'
in the last column means that the person is a friend.  An 'R' means that
the person is a relative:

     Amelia       555-5553     amelia.zodiacusque@gmail.com    F
     Anthony      555-3412     anthony.asserturo@hotmail.com   A
     Becky        555-7685     becky.algebrarum@gmail.com      A
     Bill         555-1675     bill.drowning@hotmail.com       A
     Broderick    555-0542     broderick.aliquotiens@yahoo.com R
     Camilla      555-2912     camilla.infusarum@skynet.be     R
     Fabius       555-1234     fabius.undevicesimus@ucb.edu    F
     Julie        555-6699     julie.perscrutabor@skeeve.com   F
     Martin       555-6480     martin.codicibus@hotmail.com    A
     Samuel       555-3430     samuel.lanceolis@shu.edu        A
     Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R

   The data file 'inventory-shipped' represents information about
shipments during the year.  Each record contains the month, the number
of green crates shipped, the number of red boxes shipped, the number of
orange bags shipped, and the number of blue packages shipped,
respectively.  There are 16 entries, covering the 12 months of last year
and the first four months of the current year.  An empty line separates
the data for the two years:

     Jan  13  25  15 115
     Feb  15  32  24 226
     Mar  15  24  34 228
     Apr  31  52  63 420
     May  16  34  29 208
     Jun  31  42  75 492
     Jul  24  34  67 436
     Aug  15  34  47 316
     Sep  13  55  37 277
     Oct  29  54  68 525
     Nov  20  87  82 577
     Dec  17  35  61 401

     Jan  21  36  64 620
     Feb  26  58  80 652
     Mar  24  75  70 495
     Apr  21  70  74 514

   The sample files are included in the 'gawk' distribution, in the
directory 'awklib/eg/data'.


File: gawk.info,  Node: Very Simple,  Next: Two Rules,  Prev: Sample Data Files,  Up: Getting Started

1.3 Some Simple Examples
========================

The following command runs a simple 'awk' program that searches the
input file 'mail-list' for the character string 'li' (a grouping of
characters is usually called a "string"; the term "string" is based on
similar usage in English, such as "a string of pearls" or "a string of
cars in a train"):

     awk '/li/ { print $0 }' mail-list

When lines containing 'li' are found, they are printed because
'print $0' means print the current line.  (Just 'print' by itself means
the same thing, so we could have written that instead.)

   You will notice that slashes ('/') surround the string 'li' in the
'awk' program.  The slashes indicate that 'li' is the pattern to search
for.  This type of pattern is called a "regular expression", which is
covered in more detail later (*note Regexp::).  The pattern is allowed
to match parts of words.  There are single quotes around the 'awk'
program so that the shell won't interpret any of it as special shell
characters.

   Here is what this program prints:

     $ awk '/li/ { print $0 }' mail-list
     -| Amelia       555-5553     amelia.zodiacusque@gmail.com    F
     -| Broderick    555-0542     broderick.aliquotiens@yahoo.com R
     -| Julie        555-6699     julie.perscrutabor@skeeve.com   F
     -| Samuel       555-3430     samuel.lanceolis@shu.edu        A

   In an 'awk' rule, either the pattern or the action can be omitted,
but not both.  If the pattern is omitted, then the action is performed
for _every_ input line.  If the action is omitted, the default action is
to print all lines that match the pattern.

   Thus, we could leave out the action (the 'print' statement and the
braces) in the previous example and the result would be the same: 'awk'
prints all lines matching the pattern 'li'.  By comparison, omitting the
'print' statement but retaining the braces makes an empty action that
does nothing (i.e., no lines are printed).
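
   The following sketch shows all three cases side by side, using
'printf' to supply input instead of a file:

```shell
printf 'apple\nplum\nfig\n' | awk '/l/ { print $0 }'  # explicit action
printf 'apple\nplum\nfig\n' | awk '/l/'               # default action: print
printf 'apple\nplum\nfig\n' | awk '/l/ { }'           # empty action: nothing
```

The first two commands each print 'apple' and 'plum' (the lines
containing 'l'); the third prints nothing at all.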

   Many practical 'awk' programs are just a line or two long.  Following
is a collection of useful, short programs to get you started.  Some of
these programs contain constructs that haven't been covered yet.  (The
description of the program will give you a good idea of what is going
on, but you'll need to read the rest of the Info file to become an 'awk'
expert!)  Most of the examples use a data file named 'data'.  This is
just a placeholder; if you use these programs yourself, substitute your
own file names for 'data'.  For future reference, note that there is
often more than one way to do things in 'awk'.  At some point, you may
want to look back at these examples and see if you can come up with
different ways to do the same things shown here:

   * Print every line that is longer than 80 characters:

          awk 'length($0) > 80' data

     The sole rule has a relational expression as its pattern and has no
     action--so it uses the default action, printing the record.

   * Print the length of the longest input line:

          awk '{ if (length($0) > max) max = length($0) }
               END { print max }' data

     The code associated with 'END' executes after all input has been
     read; it's the other side of the coin to 'BEGIN'.

   * Print the length of the longest line in 'data':

          expand data | awk '{ if (x < length($0)) x = length($0) }
                             END { print "maximum line length is " x }'

     This example differs slightly from the previous one: the input is
     processed by the 'expand' utility to change TABs into spaces, so
     the widths compared are actually the right-margin columns, as
     opposed to the number of input characters on each line.

   * Print every line that has at least one field:

          awk 'NF > 0' data

     This is an easy way to delete blank lines from a file (or rather,
     to create a new file similar to the old file but from which the
     blank lines have been removed).

   * Print seven random numbers from 0 to 100, inclusive:

          awk 'BEGIN { for (i = 1; i <= 7; i++)
                           print int(101 * rand()) }'

   * Print the total number of bytes used by FILES:

          ls -l FILES | awk '{ x += $5 }
                             END { print "total bytes: " x }'

   * Print the total number of kilobytes used by FILES:

          ls -l FILES | awk '{ x += $5 }
             END { print "total K-bytes:", x / 1024 }'

   * Print a sorted list of the login names of all users:

          awk -F: '{ print $1 }' /etc/passwd | sort

   * Count the lines in a file:

          awk 'END { print NR }' data

   * Print the even-numbered lines in the data file:

          awk 'NR % 2 == 0' data

     If you used the expression 'NR % 2 == 1' instead, the program would
     print the odd-numbered lines.


File: gawk.info,  Node: Two Rules,  Next: More Complex,  Prev: Very Simple,  Up: Getting Started

1.4 An Example with Two Rules
=============================

The 'awk' utility reads the input files one line at a time.  For each
line, 'awk' tries the patterns of each rule.  If several patterns match,
then several actions execute in the order in which they appear in the
'awk' program.  If no patterns match, then no actions run.

   After processing all the rules that match the line (and perhaps there
are none), 'awk' reads the next line.  (However, *note Next Statement::
and also *note Nextfile Statement::.)  This continues until the program
reaches the end of the file.  For example, the following 'awk' program
contains two rules:

     /12/  { print $0 }
     /21/  { print $0 }

The first rule has the string '12' as the pattern and 'print $0' as the
action.  The second rule has the string '21' as the pattern and also has
'print $0' as the action.  Each rule's action is enclosed in its own
pair of braces.

   This program prints every line that contains the string '12' _or_ the
string '21'.  If a line contains both strings, it is printed twice, once
by each rule.

   This is what happens if we run this program on our two sample data
files, 'mail-list' and 'inventory-shipped':

     $ awk '/12/ { print $0 }
     >      /21/ { print $0 }' mail-list inventory-shipped
     -| Anthony      555-3412     anthony.asserturo@hotmail.com   A
     -| Camilla      555-2912     camilla.infusarum@skynet.be     R
     -| Fabius       555-1234     fabius.undevicesimus@ucb.edu    F
     -| Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R
     -| Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R
     -| Jan  21  36  64 620
     -| Apr  21  70  74 514

Note how the line beginning with 'Jean-Paul' in 'mail-list' was printed
twice, once for each rule.


File: gawk.info,  Node: More Complex,  Next: Statements/Lines,  Prev: Two Rules,  Up: Getting Started

1.5 A More Complex Example
==========================

Now that we've mastered some simple tasks, let's look at what typical
'awk' programs do.  This example shows how 'awk' can be used to
summarize, select, and rearrange the output of another utility.  It uses
features that haven't been covered yet, so don't worry if you don't
understand all the details:

     ls -l | awk '$6 == "Nov" { sum += $5 }
                  END { print sum }'

   This command prints the total number of bytes in all the files in the
current directory that were last modified in November (of any year).
The 'ls -l' part of this example is a system command that gives you a
listing of the files in a directory, including each file's size and the
date the file was last modified.  Its output looks like this:

     -rw-r--r--  1 arnold   user   1933 Nov  7 13:05 Makefile
     -rw-r--r--  1 arnold   user  10809 Nov  7 13:03 awk.h
     -rw-r--r--  1 arnold   user    983 Apr 13 12:14 awk.tab.h
     -rw-r--r--  1 arnold   user  31869 Jun 15 12:20 awkgram.y
     -rw-r--r--  1 arnold   user  22414 Nov  7 13:03 awk1.c
     -rw-r--r--  1 arnold   user  37455 Nov  7 13:03 awk2.c
     -rw-r--r--  1 arnold   user  27511 Dec  9 13:07 awk3.c
     -rw-r--r--  1 arnold   user   7989 Nov  7 13:03 awk4.c

The first field contains read-write permissions, the second field
contains the number of links to the file, and the third field identifies
the file's owner.  The fourth field identifies the file's group.  The
fifth field contains the file's size in bytes.  The sixth, seventh, and
eighth fields contain the month, day, and time, respectively, that the
file was last modified.  Finally, the ninth field contains the file
name.

   The '$6 == "Nov"' in our 'awk' program is an expression that tests
whether the sixth field of the output from 'ls -l' matches the string
'Nov'.  Each time a line has the string 'Nov' for its sixth field, 'awk'
performs the action 'sum += $5'.  This adds the fifth field (the file's
size) to the variable 'sum'.  As a result, when 'awk' has finished
reading all the input lines, 'sum' is the total of the sizes of the
files whose lines matched the pattern.  (This works because 'awk'
variables are automatically initialized to zero.)
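
   The automatic initialization is easy to check in isolation (a
sketch):

```shell
# An unset awk variable acts as 0 in numeric context and as the
# null string "" in string context.
awk 'BEGIN { print sum + 0; print "<" s ">" }'
# prints "0" and then "<>"
```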

   After the last line of output from 'ls' has been processed, the 'END'
rule executes and prints the value of 'sum'.  In this example, the value
of 'sum' is 80600.

   These more advanced 'awk' techniques are covered in later minor nodes
(*note Action Overview::).  Before you can move on to more advanced
'awk' programming, you have to know how 'awk' interprets your input and
displays your output.  By manipulating fields and using 'print'
statements, you can produce some very useful and impressive-looking
reports.


File: gawk.info,  Node: Statements/Lines,  Next: Other Features,  Prev: More Complex,  Up: Getting Started

1.6 'awk' Statements Versus Lines
=================================

Most often, each line in an 'awk' program is a separate statement or
separate rule, like this:

     awk '/12/  { print $0 }
          /21/  { print $0 }' mail-list inventory-shipped

   However, 'gawk' ignores newlines after any of the following symbols
and keywords:

     ,    {    ?    :    ||    &&    do    else

A newline at any other point is considered the end of the statement.(1)
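
   For example, a newline after '&&' continues the statement, with no
backslash required (a sketch):

```shell
# The newline after '&&' does not terminate the pattern.
printf '1 one\n\n2 two\n' | awk 'NF > 0 &&
    $1 > 1 { print $2 }'
# prints "two"
```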

   If you would like to split a single statement into two lines at a
point where a newline would terminate it, you can "continue" it by
ending the first line with a backslash character ('\').  The backslash
must be the final character on the line in order to be recognized as a
continuation character.  A backslash is allowed anywhere in the
statement, even in the middle of a string or regular expression.  For
example:

     awk '/This regular expression is too long, so continue it\
      on the next line/ { print $1 }'

We have generally not used backslash continuation in our sample
programs.  'gawk' places no limit on the length of a line, so backslash
continuation is never strictly necessary; it just makes programs more
readable.  For this same reason, as well as for clarity, we have kept
most statements short in the programs presented throughout the Info
file.  Backslash continuation is most useful when your 'awk' program is
in a separate source file instead of entered from the command line.  You
should also note that many 'awk' implementations are more particular
about where you may use backslash continuation.  For example, they may
not allow you to split a string constant using backslash continuation.
Thus, for maximum portability of your 'awk' programs, it is best not to
split your lines in the middle of a regular expression or a string.

     CAUTION: _Backslash continuation does not work as described with
     the C shell._  It works for 'awk' programs in files and for
     one-shot programs, _provided_ you are using a POSIX-compliant
     shell, such as the Unix Bourne shell or Bash.  But the C shell
     behaves differently!  There you must use two backslashes in a row,
     followed by a newline.  Note also that when using the C shell,
     _every_ newline in your 'awk' program must be escaped with a
     backslash.  To illustrate:

          % awk 'BEGIN { \
          ?   print \\
          ?       "hello, world" \
          ? }'
          -| hello, world

     Here, the '%' and '?' are the C shell's primary and secondary
     prompts, analogous to the standard shell's '$' and '>'.

     Compare the previous example to how it is done with a
     POSIX-compliant shell:

          $ awk 'BEGIN {
          >   print \
          >       "hello, world"
          > }'
          -| hello, world

   'awk' is a line-oriented language.  Each rule's action has to begin
on the same line as the pattern.  To have the pattern and action on
separate lines, you _must_ use backslash continuation; there is no other
option.

   Another thing to keep in mind is that backslash continuation and
comments do not mix.  As soon as 'awk' sees the '#' that starts a
comment, it ignores _everything_ on the rest of the line.  For example:

     $ gawk 'BEGIN { print "dont panic" # a friendly \
     >                                    BEGIN rule
     > }'
     error-> gawk: cmd. line:2:                BEGIN rule
     error-> gawk: cmd. line:2:                ^ syntax error

In this case, it looks like the backslash would continue the comment
onto the next line.  However, the backslash-newline combination is never
even noticed because it is "hidden" inside the comment.  Thus, the
'BEGIN' is noted as a syntax error.

   When 'awk' statements within one rule are short, you might want to
put more than one of them on a line.  This is accomplished by separating
the statements with a semicolon (';').  This also applies to the rules
themselves.  Thus, the program shown at the start of this minor node
could also be written this way:

     /12/ { print $0 } ; /21/ { print $0 }

     NOTE: The requirement that states that rules on the same line must
     be separated with a semicolon was not in the original 'awk'
     language; it was added for consistency with the treatment of
     statements within an action.

   ---------- Footnotes ----------

   (1) The '?' and ':' referred to here are those of the three-operand
conditional expression described in *note Conditional Exp::.  Splitting
lines after
'?' and ':' is a minor 'gawk' extension; if '--posix' is specified
(*note Options::), then this extension is disabled.


File: gawk.info,  Node: Other Features,  Next: When,  Prev: Statements/Lines,  Up: Getting Started

1.7 Other Features of 'awk'
===========================

The 'awk' language provides a number of predefined, or "built-in",
variables that your programs can use to get information from 'awk'.
There are other variables your program can set as well to control how
'awk' processes your data.

   In addition, 'awk' provides a number of built-in functions for doing
common computational and string-related operations.  'gawk' provides
built-in functions for working with timestamps, performing bit
manipulation, for runtime string translation (internationalization),
determining the type of a variable, and array sorting.

   As we develop our presentation of the 'awk' language, we will
introduce most of the variables and many of the functions.  They are
described systematically in *note Built-in Variables:: and in *note
Built-in::.


File: gawk.info,  Node: When,  Next: Intro Summary,  Prev: Other Features,  Up: Getting Started

1.8 When to Use 'awk'
=====================

Now that you've seen some of what 'awk' can do, you might wonder how
'awk' could be useful for you.  By using utility programs, advanced
patterns, field separators, arithmetic statements, and other selection
criteria, you can produce much more complex output.  The 'awk' language
is very useful for producing reports from large amounts of raw data,
such as summarizing information from the output of other utility
programs like 'ls'.  (*Note More Complex::.)

   Programs written with 'awk' are usually much smaller than they would
be in other languages.  This makes 'awk' programs easy to compose and
use.  Often, 'awk' programs can be quickly composed at your keyboard,
used once, and thrown away.  Because 'awk' programs are interpreted, you
can avoid the (usually lengthy) compilation part of the typical
edit-compile-test-debug cycle of software development.

   Complex programs have been written in 'awk', including a complete
retargetable assembler for eight-bit microprocessors (*note Glossary::,
for more information), and a microcode assembler for a special-purpose
Prolog computer.  The original 'awk''s capabilities were strained by
tasks of such complexity, but modern versions are more capable.

   If you find yourself writing 'awk' scripts of more than, say, a few
hundred lines, you might consider using a different programming
language.  The shell is good at string and pattern matching; in
addition, it allows powerful use of the system utilities.  Python offers
a nice balance between high-level ease of programming and access to
system facilities.(1)

   ---------- Footnotes ----------

   (1) Other popular scripting languages include Ruby and Perl.


File: gawk.info,  Node: Intro Summary,  Prev: When,  Up: Getting Started

1.9 Summary
===========

   * Programs in 'awk' consist of PATTERN-ACTION pairs.

   * An ACTION without a PATTERN always runs.  The default ACTION for a
     pattern without one is '{ print $0 }'.

   * Use either 'awk 'PROGRAM' FILES' or 'awk -f PROGRAM-FILE FILES' to
     run 'awk'.

   * You may use the special '#!' header line to create 'awk' programs
     that are directly executable.

   * Comments in 'awk' programs start with '#' and continue to the end
     of the same line.

   * Be aware of quoting issues when writing 'awk' programs as part of a
     larger shell script (or MS-Windows batch file).

   * You may use backslash continuation to continue a source line.
     Lines are automatically continued after a comma, open brace,
     question mark, colon, '||', '&&', 'do', and 'else'.
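     For instance, continuation after '&&' needs no backslash (a
     minimal sketch; any POSIX-conforming 'awk' accepts it):

```shell
awk 'BEGIN {
    if (1 &&
        2)          # the newline after && is continued automatically
        print "both true"
}'
# -| both true
```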


File: gawk.info,  Node: Invoking Gawk,  Next: Regexp,  Prev: Getting Started,  Up: Top

2 Running 'awk' and 'gawk'
**************************

This major node covers how to run 'awk', both POSIX-standard and
'gawk'-specific command-line options, and what 'awk' and 'gawk' do with
nonoption arguments.  It then proceeds to cover how 'gawk' searches for
source files, reading standard input along with other files, 'gawk''s
environment variables, 'gawk''s exit status, using include files, and
obsolete and undocumented options and/or features.

   Many of the options and features described here are discussed in more
detail later in the Info file; feel free to skip over things in this
major node that don't interest you right now.

* Menu:

* Command Line::                How to run 'awk'.
* Options::                     Command-line options and their meanings.
* Other Arguments::             Input file names and variable assignments.
* Naming Standard Input::       How to specify standard input with other
                                files.
* Environment Variables::       The environment variables 'gawk' uses.
* Exit Status::                 'gawk''s exit status.
* Include Files::               Including other files into your program.
* Loading Shared Libraries::    Loading shared libraries into your program.
* Obsolete::                    Obsolete Options and/or features.
* Undocumented::                Undocumented Options and Features.
* Invoking Summary::            Invocation summary.


File: gawk.info,  Node: Command Line,  Next: Options,  Up: Invoking Gawk

2.1 Invoking 'awk'
==================

There are two ways to run 'awk'--with an explicit program or with one or
more program files.  Here are templates for both of them; items enclosed
in [...] in these templates are optional:

     'awk' [OPTIONS] '-f' PROGFILE ['--'] FILE ...
     'awk' [OPTIONS] ['--'] ''PROGRAM'' FILE ...

   In addition to traditional one-letter POSIX-style options, 'gawk'
also supports GNU long options.

   It is possible to invoke 'awk' with an empty program:

     awk '' datafile1 datafile2

Doing so makes little sense, though; 'awk' exits silently when given an
empty program.  (d.c.)  If '--lint' has been specified on the command
line, 'gawk' issues a warning that the program is empty.


File: gawk.info,  Node: Options,  Next: Other Arguments,  Prev: Command Line,  Up: Invoking Gawk

2.2 Command-Line Options
========================

Options begin with a dash and consist of a single character.  GNU-style
long options consist of two dashes and a keyword.  The keyword can be
abbreviated, as long as the abbreviation allows the option to be
uniquely identified.  If the option takes an argument, either the
keyword is immediately followed by an equals sign ('=') and the
argument's value, or the keyword and the argument's value are separated
by whitespace.  If a particular option with a value is given more than
once, it is the last value that counts.

   Each long option for 'gawk' has a corresponding POSIX-style short
option.  The long and short options are interchangeable in all contexts.
The following list describes options mandated by the POSIX standard:

'-F FS'
'--field-separator FS'
     Set the 'FS' variable to FS (*note Field Separators::).

'-f SOURCE-FILE'
'--file SOURCE-FILE'
     Read the 'awk' program source from SOURCE-FILE instead of from the
     first nonoption argument.  This option may be given multiple times;
     the 'awk' program consists of the concatenation of the contents of
     each specified SOURCE-FILE.

'-v VAR=VAL'
'--assign VAR=VAL'
     Set the variable VAR to the value VAL _before_ execution of the
     program begins.  Such variable values are available inside the
     'BEGIN' rule (*note Other Arguments::).

     The '-v' option can only set one variable, but it can be used more
     than once, setting another variable each time, like this: 'awk
     -v foo=1 -v bar=2 ...'.

          CAUTION: Using '-v' to set the values of the built-in
          variables may lead to surprising results.  'awk' will reset
          the values of those variables as it needs to, possibly
          ignoring any initial value you may have given.

'-W GAWK-OPT'
     Provide an implementation-specific option.  This is the POSIX
     convention for providing implementation-specific options.  These
     options also have corresponding GNU-style long options.  Note that
     the long options may be abbreviated, as long as the abbreviations
     remain unique.  The full list of 'gawk'-specific options is
     provided next.

'--'
     Signal the end of the command-line options.  The following
     arguments are not treated as options even if they begin with '-'.
     This interpretation of '--' follows the POSIX argument parsing
     conventions.

     This is useful if you have file names that start with '-', or in
     shell scripts, if you have file names that will be specified by the
     user that could start with '-'.  It is also useful for passing
     options on to the 'awk' program; see *note Getopt Function::.
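   To illustrate the POSIX options just described (the file name and
variable name here are invented for the example):

```shell
# -F sets FS, -v assigns 'label' before BEGIN runs, and '--' ends
# option processing so that the leading '-' in the file name is not
# mistaken for an option:
printf 'alice:x:1000\nbob:x:1001\n' > -demo.txt
awk -F: -v label=user -- '{ print label ": " $1 }' -demo.txt
# -| user: alice
# -| user: bob
rm -- -demo.txt
```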

   The following list describes 'gawk'-specific options:

'-b'
'--characters-as-bytes'
     Cause 'gawk' to treat all input data as single-byte characters.  In
     addition, all output written with 'print' or 'printf' is treated as
     single-byte characters.

     Normally, 'gawk' follows the POSIX standard and attempts to process
     its input data according to the current locale (*note Locales::).
     This can often involve converting multibyte characters into wide
     characters (internally), and can lead to problems or confusion if
     the input data does not contain valid multibyte characters.  This
     option is an easy way to tell 'gawk', "Hands off my data!"

'-c'
'--traditional'
     Specify "compatibility mode", in which the GNU extensions to the
     'awk' language are disabled, so that 'gawk' behaves just like BWK
     'awk'.  *Note POSIX/GNU::, which summarizes the extensions.  Also
     see *note Compatibility Mode::.

'-C'
'--copyright'
     Print the short version of the General Public License and then
     exit.

'-d'[FILE]
'--dump-variables'['='FILE]
     Print a sorted list of global variables, their types, and final
     values to FILE.  If no FILE is provided, print this list to a file
     named 'awkvars.out' in the current directory.  No space is allowed
     between the '-d' and FILE, if FILE is supplied.

     Having a list of all global variables is a good way to look for
     typographical errors in your programs.  You would also use this
     option if you have a large program with a lot of functions, and you
     want to be sure that your functions don't inadvertently use global
     variables that you meant to be local.  (This is a particularly easy
     mistake to make with simple variable names like 'i', 'j', etc.)

'-D'[FILE]
'--debug'['='FILE]
     Enable debugging of 'awk' programs (*note Debugging::).  By
     default, the debugger reads commands interactively from the
     keyboard (standard input).  The optional FILE argument allows you
     to specify a file with a list of commands for the debugger to
     execute noninteractively.  No space is allowed between the '-D' and
     FILE, if FILE is supplied.

'-e' PROGRAM-TEXT
'--source' PROGRAM-TEXT
     Provide program source code in the PROGRAM-TEXT.  This option
     allows you to mix source code in files with source code that you
     enter on the command line.  This is particularly useful when you
     have library functions that you want to use from your command-line
     programs (*note AWKPATH Variable::).

'-E' FILE
'--exec' FILE
     Similar to '-f', read 'awk' program text from FILE.  There are two
     differences from '-f':

        * This option terminates option processing; anything else on the
          command line is passed on directly to the 'awk' program.

        * Command-line variable assignments of the form 'VAR=VALUE' are
          disallowed.

     This option is particularly necessary for World Wide Web CGI
     applications that pass arguments through the URL; using this option
     prevents a malicious (or other) user from passing in options,
     assignments, or 'awk' source code (via '-e') to the CGI
     application.(1)  This option should be used with '#!' scripts
     (*note Executable Scripts::), like so:

          #! /usr/local/bin/gawk -E

          AWK PROGRAM HERE ...

'-g'
'--gen-pot'
     Analyze the source program and generate a GNU 'gettext' portable
     object template file on standard output for all string constants
     that have been marked for translation.  *Note
     Internationalization::, for information about this option.

'-h'
'--help'
     Print a "usage" message summarizing the short- and long-style
     options that 'gawk' accepts and then exit.

'-i' SOURCE-FILE
'--include' SOURCE-FILE
     Read an 'awk' source library from SOURCE-FILE.  This option is
     completely equivalent to using the '@include' directive inside your
     program.  It is very similar to the '-f' option, but there are two
     important differences.  First, when '-i' is used, the program
     source is not loaded if it has been previously loaded, whereas with
     '-f', 'gawk' always loads the file.  Second, because this option is
     intended to be used with code libraries, 'gawk' does not recognize
     such files as constituting main program input.  Thus, after
     processing an '-i' argument, 'gawk' still expects to find the main
     source code via the '-f' option or on the command line.

'-l' EXT
'--load' EXT
     Load a dynamic extension named EXT.  Extensions are stored as
     system shared libraries.  This option searches for the library
     using the 'AWKLIBPATH' environment variable.  The correct library
     suffix for your platform will be supplied by default, so it need
     not be specified in the extension name.  The extension
     initialization routine should be named 'dl_load()'.  An alternative
     is to use the '@load' keyword inside the program to load a shared
     library.  This advanced feature is described in detail in *note
     Dynamic Extensions::.

'-L'[VALUE]
'--lint'['='VALUE]
     Warn about constructs that are dubious or nonportable to other
     'awk' implementations.  No space is allowed between the '-L' and
     VALUE, if VALUE is supplied.  Some warnings are issued when 'gawk'
     first reads your program.  Others are issued at runtime, as your
     program executes.  With an optional argument of 'fatal', lint
     warnings become fatal errors.  This may be drastic, but its use
     will certainly encourage the development of cleaner 'awk' programs.
     With an optional argument of 'invalid', only warnings about things
     that are actually invalid are issued.  (This is not fully
     implemented yet.)

     Some warnings are only printed once, even if the dubious constructs
     they warn about occur multiple times in your 'awk' program.  Thus,
     when eliminating problems pointed out by '--lint', you should take
     care to search for all occurrences of each inappropriate construct.
     As 'awk' programs are usually short, doing so is not burdensome.

'-M'
'--bignum'
     Select arbitrary-precision arithmetic on numbers.  This option has
     no effect if 'gawk' is not compiled to use the GNU MPFR and MP
     libraries (*note Arbitrary Precision Arithmetic::).

'-n'
'--non-decimal-data'
     Enable automatic interpretation of octal and hexadecimal values in
     input data (*note Nondecimal Data::).

          CAUTION: This option can severely break old programs.  Use
          with care.  Also note that this option may disappear in a
          future version of 'gawk'.

'-N'
'--use-lc-numeric'
     Force the use of the locale's decimal point character when parsing
     numeric input data (*note Locales::).

'-o'[FILE]
'--pretty-print'['='FILE]
     Enable pretty-printing of 'awk' programs.  Implies '--no-optimize'.
     By default, the output program is created in a file named
     'awkprof.out' (*note Profiling::).  The optional FILE argument
     allows you to specify a different file name for the output.  No
     space is allowed between the '-o' and FILE, if FILE is supplied.

          NOTE: In the past, this option would also execute your
          program.  This is no longer the case.

'-O'
'--optimize'
     Enable 'gawk''s default optimizations on the internal
     representation of the program.  At the moment, this includes simple
     constant folding and tail recursion elimination in function calls.

     These optimizations are enabled by default.  This option remains
     primarily for backwards compatibility.  However, it may be used to
     cancel the effect of an earlier '-s' option (see later in this
     list).

'-p'[FILE]
'--profile'['='FILE]
     Enable profiling of 'awk' programs (*note Profiling::).  Implies
     '--no-optimize'.  By default, profiles are created in a file named
     'awkprof.out'.  The optional FILE argument allows you to specify a
     different file name for the profile file.  No space is allowed
     between the '-p' and FILE, if FILE is supplied.

     The profile contains execution counts for each statement in the
     program in the left margin, and function call counts for each
     function.

'-P'
'--posix'
     Operate in strict POSIX mode.  This disables all 'gawk' extensions
     (just like '--traditional') and disables all extensions not allowed
     by POSIX. *Note Common Extensions:: for a summary of the extensions
     in 'gawk' that are disabled by this option.  Also, the following
     additional restrictions apply:

        * Newlines are not allowed after '?' or ':' (*note Conditional
          Exp::).

        * Specifying '-Ft' on the command line does not set the value of
          'FS' to be a single TAB character (*note Field Separators::).

        * The locale's decimal point character is used for parsing input
          data (*note Locales::).

     If you supply both '--traditional' and '--posix' on the command
     line, '--posix' takes precedence.  'gawk' issues a warning if both
     options are supplied.

'-r'
'--re-interval'
     Allow interval expressions (*note Regexp Operators::) in regexps.
     This is now 'gawk''s default behavior.  Nevertheless, this option
     remains (both for backward compatibility and for use in combination
     with '--traditional').

'-s'
'--no-optimize'
     Disable 'gawk''s default optimizations on the internal
     representation of the program.

'-S'
'--sandbox'
     Disable the 'system()' function, input redirections with 'getline',
     output redirections with 'print' and 'printf', and dynamic
     extensions.  This is particularly useful when you want to run 'awk'
     scripts from questionable sources and need to make sure the scripts
     can't access your system (other than the specified input data
     file).

'-t'
'--lint-old'
     Warn about constructs that are not available in the original
     version of 'awk' from Version 7 Unix (*note V7/SVR3.1::).

'-V'
'--version'
     Print version information for this particular copy of 'gawk'.  This
     allows you to determine if your copy of 'gawk' is up to date with
     respect to whatever the Free Software Foundation is currently
     distributing.  It is also useful for bug reports (*note Bugs::).

   As long as program text has been supplied, any other options are
flagged as invalid with a warning message but are otherwise ignored.

   In compatibility mode, as a special case, if the value of FS supplied
to the '-F' option is 't', then 'FS' is set to the TAB character
('"\t"').  This is true only for '--traditional' and not for '--posix'
(*note Field Separators::).
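   Because the 't' special case applies only in compatibility mode, a
portable way to get a TAB separator is to pass the escape explicitly (a
minimal sketch):

```shell
# awk processes the '\t' escape sequence when assigning FS:
printf 'one\ttwo\tthree\n' | awk -F'\t' '{ print $2 }'
# -| two
```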

   The '-f' option may be used more than once on the command line.  If
it is, 'awk' reads its program source from all of the named files, as if
they had been concatenated together into one big file.  This is useful
for creating libraries of 'awk' functions.  These functions can be
written once and then retrieved from a standard place, instead of having
to be included in each individual program.  The '-i' option is similar
in this regard.  (As mentioned in *note Definition Syntax::, function
names must be unique.)

   With standard 'awk', library functions can still be used, even if the
program is entered at the keyboard, by specifying '-f /dev/tty'.  After
typing your program, type 'Ctrl-d' (the end-of-file character) to
terminate it.  (You may also use '-f -' to read program source from the
standard input, but then you will not be able to also use the standard
input as a source of data.)

   Because it is clumsy using the standard 'awk' mechanisms to mix
source file and command-line 'awk' programs, 'gawk' provides the '-e'
option.  This does not require you to preempt the standard input for
your source code; it allows you to easily mix command-line and library
source code (*note AWKPATH Variable::).  As with '-f', the '-e' and '-i'
options may also be used multiple times on the command line.

   If no '-f' or '-e' option is specified, then 'gawk' uses the first
nonoption command-line argument as the text of the program source code.

   If the environment variable 'POSIXLY_CORRECT' exists, then 'gawk'
behaves in strict POSIX mode, exactly as if you had supplied '--posix'.
Many GNU programs look for this environment variable to suppress
extensions that conflict with POSIX, but 'gawk' behaves differently: it
suppresses all extensions, even those that do not conflict with POSIX,
and behaves in strict POSIX mode.  If '--lint' is supplied on the
command line and 'gawk' turns on POSIX mode because of
'POSIXLY_CORRECT', then it issues a warning message indicating that
POSIX mode is in effect.  You would typically set this variable in your
shell's startup file.  For a Bourne-compatible shell (such as Bash), you
would add these lines to the '.profile' file in your home directory:

     POSIXLY_CORRECT=true
     export POSIXLY_CORRECT

   For a C shell-compatible shell,(2) you would add this line to the
'.login' file in your home directory:

     setenv POSIXLY_CORRECT true

   Having 'POSIXLY_CORRECT' set is not recommended for daily use, but it
is good for testing the portability of your programs to other
environments.

   ---------- Footnotes ----------

   (1) For more detail, please see Section 4.4 of RFC 3875
(http://www.ietf.org/rfc/rfc3875).  Also see the explanatory note sent
to the 'gawk' bug mailing list
(http://lists.gnu.org/archive/html/bug-gawk/2014-11/msg00022.html).

   (2) Not recommended.


File: gawk.info,  Node: Other Arguments,  Next: Naming Standard Input,  Prev: Options,  Up: Invoking Gawk

2.3 Other Command-Line Arguments
================================

Any additional arguments on the command line are normally treated as
input files to be processed in the order specified.  However, an
argument that has the form 'VAR=VALUE' assigns the value VALUE to the
variable VAR--it does not specify a file at all.  (See *note Assignment
Options::.)  In the following example, 'count=1' is a variable
assignment, not a file name:

     awk -f program.awk file1 count=1 file2

   All the command-line arguments are made available to your 'awk'
program in the 'ARGV' array (*note Built-in Variables::).  Command-line
options and the program text (if present) are omitted from 'ARGV'.  All
other arguments, including variable assignments, are included.  As each
element of 'ARGV' is processed, 'gawk' sets 'ARGIND' to the index in
'ARGV' of the current element.
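   A quick way to see what lands in 'ARGV' (the file names are
placeholders; because the program is 'BEGIN'-only, they are never
opened):

```shell
awk 'BEGIN {
    for (i = 1; i < ARGC; i++)   # ARGV[0] holds the program name
        print i, ARGV[i]
}' file1 count=1 file2
# -| 1 file1
# -| 2 count=1
# -| 3 file2
```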

   Changing 'ARGC' and 'ARGV' in your 'awk' program lets you control how
'awk' processes the input files; this is described in more detail in
*note ARGC and ARGV::.

   The distinction between file name arguments and variable-assignment
arguments is made when 'awk' is about to open the next input file.  At
that point in execution, it checks the file name to see whether it is
really a variable assignment; if so, 'awk' sets the variable instead of
reading a file.

   Therefore, the variables actually receive the given values after all
previously specified files have been read.  In particular, the values of
variables assigned in this fashion are _not_ available inside a 'BEGIN'
rule (*note BEGIN/END::), because such rules are run before 'awk' begins
scanning the argument list.
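   The timing difference is easy to demonstrate (using '-' to read
standard input as the data file; the variable name 'v' is arbitrary):

```shell
# 'v=42' is processed only when awk reaches that argument, which is
# after BEGIN has already run:
printf 'data\n' | awk 'BEGIN { print "BEGIN sees [" v "]" }
                       { print "rule sees  [" v "]" }' v=42 -
# -| BEGIN sees []
# -| rule sees  [42]
```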

   The variable values given on the command line are processed for
escape sequences (*note Escape Sequences::).  (d.c.)

   In some very early implementations of 'awk', when a variable
assignment occurred before any file names, the assignment would happen
_before_ the 'BEGIN' rule was executed.  'awk''s behavior was thus
inconsistent; some command-line assignments were available inside the
'BEGIN' rule, while others were not.  Unfortunately, some applications
came to depend upon this "feature."  When 'awk' was changed to be more
consistent, the '-v' option was added to accommodate applications that
depended upon the old behavior.

   The variable assignment feature is most useful for assigning to
variables such as 'RS', 'OFS', and 'ORS', which control input and output
formats, before scanning the data files.  It is also useful for
controlling state if multiple passes are needed over a data file.  For
example:

     awk 'pass == 1  { PASS 1 STUFF }
          pass == 2  { PASS 2 STUFF }' pass=1 mydata pass=2 mydata

   Given the variable assignment feature, the '-F' option for setting
the value of 'FS' is not strictly necessary.  It remains for historical
compatibility.


File: gawk.info,  Node: Naming Standard Input,  Next: Environment Variables,  Prev: Other Arguments,  Up: Invoking Gawk

2.4 Naming Standard Input
=========================

Often, you may wish to read standard input together with other files.
For example, you may wish to read one file, read standard input coming
from a pipe, and then read another file.

   The way to name the standard input, with all versions of 'awk', is to
use a single, standalone minus sign or dash, '-'.  For example:

     SOME_COMMAND | awk -f myprog.awk file1 - file2

Here, 'awk' first reads 'file1', then it reads the output of
SOME_COMMAND, and finally it reads 'file2'.

   You may also use '"-"' to name standard input when reading files with
'getline' (*note Getline/File::).

   In addition, 'gawk' allows you to specify the special file name
'/dev/stdin', both on the command line and with 'getline'.  Some other
versions of 'awk' also support this, but it is not standard.  (Some
operating systems provide a '/dev/stdin' file in the filesystem;
however, 'gawk' always processes this file name itself.)


File: gawk.info,  Node: Environment Variables,  Next: Exit Status,  Prev: Naming Standard Input,  Up: Invoking Gawk

2.5 The Environment Variables 'gawk' Uses
=========================================

A number of environment variables influence how 'gawk' behaves.

* Menu:

* AWKPATH Variable::            Searching directories for 'awk'
                                programs.
* AWKLIBPATH Variable::         Searching directories for 'awk' shared
                                libraries.
* Other Environment Variables:: The environment variables.


File: gawk.info,  Node: AWKPATH Variable,  Next: AWKLIBPATH Variable,  Up: Environment Variables

2.5.1 The 'AWKPATH' Environment Variable
----------------------------------------

The previous minor node described how 'awk' program files can be named
on the command line with the '-f' option.  In most 'awk'
implementations, you must supply a precise pathname for each program
file, unless the file is in the current directory.  But with 'gawk', if
the file name supplied to the '-f' or '-i' options does not contain a
directory separator '/', then 'gawk' searches a list of directories
(called the "search path") one by one, looking for a file with the
specified name.

   The search path is a string consisting of directory names separated
by colons.(1)  'gawk' gets its search path from the 'AWKPATH'
environment variable.  If that variable does not exist, or if it has an
empty value, 'gawk' uses a default path (described shortly).

   The search path feature is particularly helpful for building
libraries of useful 'awk' functions.  The library files can be placed in
a standard directory in the default path and then specified on the
command line with a short file name.  Otherwise, you would have to type
the full file name for each file.

   By using the '-i' or '-f' options, your command-line 'awk' programs
can use facilities in 'awk' library files (*note Library Functions::).
Path searching is not done if 'gawk' is in compatibility mode.  This is
true for both '--traditional' and '--posix'.  *Note Options::.

   If the source code file is not found after the initial search, the
path is searched again after adding the suffix '.awk' to the file name.

   'gawk''s path search mechanism is similar to the shell's.  (See 'The
Bourne-Again SHell manual' (http://www.gnu.org/software/bash/manual/).)
It treats a null entry in the path as indicating the current directory.
(A null entry is indicated by starting or ending the path with a colon
or by placing two colons next to each other ['::'].)

     NOTE: To include the current directory in the path, either place
     '.' as an entry in the path or write a null entry in the path.

     Different past versions of 'gawk' would also look explicitly in the
     current directory, either before or after the path search.  As of
     version 4.1.2, this no longer happens; if you wish to look in the
     current directory, you must include '.' either as a separate entry
     or as a null entry in the search path.

   The default value for 'AWKPATH' is '.:/usr/local/share/awk'.(2)
Since '.' is included at the beginning, 'gawk' searches first in the
current directory and then in '/usr/local/share/awk'.  In practice, this
means that you will rarely need to change the value of 'AWKPATH'.

   *Note Shell Startup Files::, for information on functions that help
to manipulate the 'AWKPATH' variable.

   'gawk' places the value of the search path that it used into
'ENVIRON["AWKPATH"]'.  This provides access to the actual search path
value from within an 'awk' program.
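   For example ('ENVIRON' is standard 'awk', so this particular
demonstration does not depend on 'gawk'; the path shown is invented):

```shell
AWKPATH="/tmp/mylib:." awk 'BEGIN { print ENVIRON["AWKPATH"] }'
# -| /tmp/mylib:.
```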

   Although you can change 'ENVIRON["AWKPATH"]' within your 'awk'
program, this has no effect on the running program's behavior.  This
makes sense: the 'AWKPATH' environment variable is used to find the
program source files.  Once your program is running, all the files have
been found, and 'gawk' no longer needs to use 'AWKPATH'.

   ---------- Footnotes ----------

   (1) Semicolons on MS-Windows and MS-DOS.

   (2) Your version of 'gawk' may use a different directory; it will
depend upon how 'gawk' was built and installed.  The actual directory is
the value of '$(datadir)' generated when 'gawk' was configured.  You
probably don't need to worry about this, though.


File: gawk.info,  Node: AWKLIBPATH Variable,  Next: Other Environment Variables,  Prev: AWKPATH Variable,  Up: Environment Variables

2.5.2 The 'AWKLIBPATH' Environment Variable
-------------------------------------------

The 'AWKLIBPATH' environment variable is similar to the 'AWKPATH'
variable, but it is used to search for loadable extensions (stored as
system shared libraries) specified with the '-l' option rather than for
source files.  If the extension is not found, the path is searched again
after adding the appropriate shared library suffix for the platform.
For example, on GNU/Linux systems, the suffix '.so' is used.  The search
path specified is also used for extensions loaded via the '@load'
keyword (*note Loading Shared Libraries::).

   If 'AWKLIBPATH' does not exist in the environment, or if it has an
empty value, 'gawk' uses a default path; this is typically
'/usr/local/lib/gawk', although it can vary depending upon how 'gawk'
was built.

   *Note Shell Startup Files::, for information on functions that help
to manipulate the 'AWKLIBPATH' variable.

   'gawk' places the value of the search path that it used into
'ENVIRON["AWKLIBPATH"]'.  This provides access to the actual search path
value from within an 'awk' program.


File: gawk.info,  Node: Other Environment Variables,  Prev: AWKLIBPATH Variable,  Up: Environment Variables

2.5.3 Other Environment Variables
---------------------------------

A number of other environment variables affect 'gawk''s behavior, but
they are more specialized.  Those in the following list are meant to be
used by regular users:

'GAWK_MSEC_SLEEP'
     Specifies the interval between connection retries, in milliseconds.
     On systems that do not support the 'usleep()' system call, the
     value is rounded up to an integral number of seconds.

'GAWK_READ_TIMEOUT'
     Specifies the time, in milliseconds, for 'gawk' to wait for input
     before returning with an error.  *Note Read Timeout::.

'GAWK_SOCK_RETRIES'
     Controls the number of times 'gawk' attempts to retry a two-way
     TCP/IP (socket) connection before giving up.  *Note TCP/IP
     Networking::.  Note that when nonfatal I/O is enabled (*note
     Nonfatal::), 'gawk' only tries to open a TCP/IP socket once.

'POSIXLY_CORRECT'
     Causes 'gawk' to switch to POSIX-compatibility mode, disabling all
     traditional and GNU extensions.  *Note Options::.

   The environment variables in the following list are meant for use by
the 'gawk' developers for testing and tuning.  They are subject to
change.  The variables are:

'AWKBUFSIZE'
     This variable only affects 'gawk' on POSIX-compliant systems.  With
     a value of 'exact', 'gawk' uses the size of each input file as the
     size of the memory buffer to allocate for I/O. Otherwise, the value
     should be a number, and 'gawk' uses that number as the size of the
     buffer to allocate.  (When this variable is not set, 'gawk' uses
     the smaller of the file's size and the "default" blocksize, which
     is usually the filesystem's I/O blocksize.)

'AWK_HASH'
     If this variable exists with a value of 'gst', 'gawk' switches to
     using the hash function from GNU Smalltalk for managing arrays.
     This function may be marginally faster than the standard function.

'AWKREADFUNC'
     If this variable exists, 'gawk' switches to reading source files
     one line at a time, instead of reading in blocks.  This exists for
     debugging problems on filesystems on non-POSIX operating systems
     where I/O is performed in records, not in blocks.

'GAWK_MSG_SRC'
     If this variable exists, 'gawk' includes the file name and line
     number within the 'gawk' source code from which warning and/or
     fatal messages are generated.  Its purpose is to help isolate the
     source of a message, as there are multiple places that produce the
     same warning or error message.

'GAWK_LOCALE_DIR'
     Specifies the location of compiled message object files for 'gawk'
     itself.  This is passed to the 'bindtextdomain()' function when
     'gawk' starts up.

'GAWK_NO_DFA'
     If this variable exists, 'gawk' does not use the DFA regexp matcher
     for "does it match" kinds of tests.  This can cause 'gawk' to be
     slower.  Its purpose is to help isolate differences between the two
     regexp matchers that 'gawk' uses internally.  (There aren't
     supposed to be differences, but occasionally theory and practice
     don't coordinate with each other.)

'GAWK_STACKSIZE'
     This specifies the amount by which 'gawk' should grow its internal
     evaluation stack, when needed.

'INT_CHAIN_MAX'
     This specifies the intended maximum number of items 'gawk' will
     maintain on a hash chain for managing arrays indexed by integers.

'STR_CHAIN_MAX'
     This specifies the intended maximum number of items 'gawk' will
     maintain on a hash chain for managing arrays indexed by strings.

'TIDYMEM'
     If this variable exists, 'gawk' uses the 'mtrace()' library calls
     from the GNU C library to help track down possible memory leaks.


File: gawk.info,  Node: Exit Status,  Next: Include Files,  Prev: Environment Variables,  Up: Invoking Gawk

2.6 'gawk''s Exit Status
========================

If the 'exit' statement is used with a value (*note Exit Statement::),
then 'gawk' exits with the numeric value given to it.

   Otherwise, if there were no problems during execution, 'gawk' exits
with the value of the C constant 'EXIT_SUCCESS'.  This is usually zero.

   If an error occurs, 'gawk' exits with the value of the C constant
'EXIT_FAILURE'.  This is usually one.

   If 'gawk' exits because of a fatal error, the exit status is two.  On
non-POSIX systems, this value may be mapped to 'EXIT_FAILURE'.


File: gawk.info,  Node: Include Files,  Next: Loading Shared Libraries,  Prev: Exit Status,  Up: Invoking Gawk

2.7 Including Other Files into Your Program
===========================================

This minor node describes a feature that is specific to 'gawk'.

   The '@include' keyword can be used to read external 'awk' source
files.  This gives you the ability to split large 'awk' source files
into smaller, more manageable pieces, and also lets you reuse common
'awk' code from various 'awk' scripts.  In other words, you can group
together 'awk' functions used to carry out specific tasks into external
files.  These files can be used just like function libraries, using the
'@include' keyword in conjunction with the 'AWKPATH' environment
variable.  Note that source files may also be included using the '-i'
option.

   Let's see an example.  We'll start with two (trivial) 'awk' scripts,
namely 'test1' and 'test2'.  Here is the 'test1' script:

     BEGIN {
         print "This is script test1."
     }

and here is 'test2':

     @include "test1"
     BEGIN {
         print "This is script test2."
     }

   Running 'gawk' with 'test2' produces the following result:

     $ gawk -f test2
     -| This is script test1.
     -| This is script test2.

   'gawk' runs the 'test2' script, which includes 'test1' using the
'@include' keyword.  So, to include external 'awk' source files, you
just use '@include' followed by the name of the file to be included,
enclosed in double quotes.

     NOTE: Keep in mind that this is a language construct and the file
     name cannot be a string variable, but rather just a literal string
     constant in double quotes.

   The files to be included may be nested; e.g., given a third script,
namely 'test3':

     @include "test2"
     BEGIN {
         print "This is script test3."
     }

Running 'gawk' with the 'test3' script produces the following results:

     $ gawk -f test3
     -| This is script test1.
     -| This is script test2.
     -| This is script test3.

   The file name can, of course, be a pathname.  For example:

     @include "../io_funcs"

and:

     @include "/usr/awklib/network"

are both valid.  The 'AWKPATH' environment variable can be of great
value when using '@include'.  The same rules for the use of the
'AWKPATH' variable in command-line file searches (*note AWKPATH
Variable::) apply to '@include' also.

   This is very helpful in constructing 'gawk' function libraries.  If
you have a large script with useful, general-purpose 'awk' functions,
you can break it down into library files and put those files in a
special directory.  You can then include those "libraries," either by
using the full pathnames of the files, or by setting the 'AWKPATH'
environment variable accordingly and then using '@include' with just the
file part of the full pathname.  Of course, you can keep library files
in more than one directory; the more complex the working environment is,
the more directories you may need to organize the files to be included.

   Given the ability to specify multiple '-f' options, the '@include'
mechanism is not strictly necessary.  However, the '@include' keyword
can help you in constructing self-contained 'gawk' programs, thus
reducing the need for writing complex and tedious command lines.  In
particular, '@include' is very useful for writing CGI scripts to be run
from web pages.

   As mentioned in *note AWKPATH Variable::, the default search path
begins with '.', so the current directory is normally searched first
for source files, before the other directories in 'AWKPATH'; this also
applies to files named with '@include'.


File: gawk.info,  Node: Loading Shared Libraries,  Next: Obsolete,  Prev: Include Files,  Up: Invoking Gawk

2.8 Loading Dynamic Extensions into Your Program
================================================

This minor node describes a feature that is specific to 'gawk'.

   The '@load' keyword can be used to read external 'awk' extensions
(stored as system shared libraries).  This allows you to link in
compiled code that may offer superior performance and/or give you access
to extended capabilities not supported by the 'awk' language.  The
'AWKLIBPATH' variable is used to search for the extension.  Using
'@load' is completely equivalent to using the '-l' command-line option.

   If the extension is not initially found in 'AWKLIBPATH', another
search is conducted after appending the platform's default shared
library suffix to the file name.  For example, on GNU/Linux systems, the
suffix '.so' is used:

     $ gawk '@load "ordchr"; BEGIN {print chr(65)}'
     -| A

This is equivalent to the following example:

     $ gawk -lordchr 'BEGIN {print chr(65)}'
     -| A

For command-line usage, the '-l' option is more convenient, but '@load'
is useful for embedding inside an 'awk' source file that requires access
to an extension.

   *note Dynamic Extensions::, describes how to write extensions (in C
or C++) that can be loaded with either '@load' or the '-l' option.  It
also describes the 'ordchr' extension.


File: gawk.info,  Node: Obsolete,  Next: Undocumented,  Prev: Loading Shared Libraries,  Up: Invoking Gawk

2.9 Obsolete Options and/or Features
====================================

This minor node describes features and/or command-line options from
previous releases of 'gawk' that either are not available in the current
version or are still supported but deprecated (meaning that they will
_not_ be in the next release).

   The process-related special files '/dev/pid', '/dev/ppid',
'/dev/pgrpid', and '/dev/user' were deprecated in 'gawk' 3.1, but still
worked.  As of version 4.0, they are no longer interpreted specially by
'gawk'.  (Use 'PROCINFO' instead; see *note Auto-set::.)


File: gawk.info,  Node: Undocumented,  Next: Invoking Summary,  Prev: Obsolete,  Up: Invoking Gawk

2.10 Undocumented Options and Features
======================================

     Use the Source, Luke!
                             -- _Obi-Wan_

   This minor node intentionally left blank.


File: gawk.info,  Node: Invoking Summary,  Prev: Undocumented,  Up: Invoking Gawk

2.11 Summary
============

   * Use either 'awk 'PROGRAM' FILES' or 'awk -f PROGRAM-FILE FILES' to
     run 'awk'.

   * The three standard options for all versions of 'awk' are '-f',
     '-F', and '-v'.  'gawk' supplies these and many others, as well as
     corresponding GNU-style long options.

   * Nonoption command-line arguments are usually treated as file names,
     unless they have the form 'VAR=VALUE', in which case they are taken
     as variable assignments to be performed at that point in processing
     the input.

   * All nonoption command-line arguments, excluding the program text,
     are placed in the 'ARGV' array.  Adjusting 'ARGC' and 'ARGV'
     affects how 'awk' processes input.

   * You can use a single minus sign ('-') to refer to standard input on
     the command line.  'gawk' also lets you use the special file name
     '/dev/stdin'.

   * 'gawk' pays attention to a number of environment variables.
     'AWKPATH', 'AWKLIBPATH', and 'POSIXLY_CORRECT' are the most
     important ones.

   * 'gawk''s exit status conveys information to the program that
     invoked it.  Use the 'exit' statement from within an 'awk' program
     to set the exit status.

   * 'gawk' allows you to include other 'awk' source files into your
     program using the '@include' statement and/or the '-i' and '-f'
     command-line options.

   * 'gawk' allows you to load additional functions written in C or C++
     using the '@load' statement and/or the '-l' option.  (This advanced
     feature is described later, in *note Dynamic Extensions::.)


File: gawk.info,  Node: Regexp,  Next: Reading Files,  Prev: Invoking Gawk,  Up: Top

3 Regular Expressions
*********************

A "regular expression", or "regexp", is a way of describing a set of
strings.  Because regular expressions are such a fundamental part of
'awk' programming, their format and use deserve a separate major node.

   A regular expression enclosed in slashes ('/') is an 'awk' pattern
that matches every input record whose text belongs to that set.  The
simplest regular expression is a sequence of letters, numbers, or both.
Such a regexp matches any string that contains that sequence.  Thus, the
regexp 'foo' matches any string containing 'foo'.  Therefore, the pattern
'/foo/' matches any input record containing the three adjacent
characters 'foo' _anywhere_ in the record.  Other kinds of regexps let
you specify more complicated classes of strings.

* Menu:

* Regexp Usage::                How to Use Regular Expressions.
* Escape Sequences::            How to write nonprinting characters.
* Regexp Operators::            Regular Expression Operators.
* Bracket Expressions::         What can go between '[...]'.
* Leftmost Longest::            How much text matches.
* Computed Regexps::            Using Dynamic Regexps.
* GNU Regexp Operators::        Operators specific to GNU software.
* Case-sensitivity::            How to do case-insensitive matching.
* Strong Regexp Constants::     Strongly typed regexp constants.
* Regexp Summary::              Regular expressions summary.


File: gawk.info,  Node: Regexp Usage,  Next: Escape Sequences,  Up: Regexp

3.1 How to Use Regular Expressions
==================================

A regular expression can be used as a pattern by enclosing it in
slashes.  Then the regular expression is tested against the entire text
of each record.  (Normally, it only needs to match some part of the text
in order to succeed.)  For example, the following prints the second
field of each record where the string 'li' appears anywhere in the
record:

     $ awk '/li/ { print $2 }' mail-list
     -| 555-5553
     -| 555-0542
     -| 555-6699
     -| 555-3430

   Regular expressions can also be used in matching expressions.  These
expressions allow you to specify the string to match against; it need
not be the entire current input record.  The two operators '~' and '!~'
perform regular expression comparisons.  Expressions using these
operators can be used as patterns, or in 'if', 'while', 'for', and 'do'
statements.  (*Note Statements::.)  For example, the following is true
if the expression EXP (taken as a string) matches REGEXP:

     EXP ~ /REGEXP/

This example matches, or selects, all input records with the uppercase
letter 'J' somewhere in the first field:

     $ awk '$1 ~ /J/' inventory-shipped
     -| Jan  13  25  15 115
     -| Jun  31  42  75 492
     -| Jul  24  34  67 436
     -| Jan  21  36  64 620

   So does this:

     awk '{ if ($1 ~ /J/) print }' inventory-shipped

   This next example is true if the expression EXP (taken as a character
string) does _not_ match REGEXP:

     EXP !~ /REGEXP/

   The following example matches, or selects, all input records whose
first field _does not_ contain the uppercase letter 'J':

     $ awk '$1 !~ /J/' inventory-shipped
     -| Feb  15  32  24 226
     -| Mar  15  24  34 228
     -| Apr  31  52  63 420
     -| May  16  34  29 208
     ...

   When a regexp is enclosed in slashes, such as '/foo/', we call it a
"regexp constant", much like '5.27' is a numeric constant and '"foo"' is
a string constant.


File: gawk.info,  Node: Escape Sequences,  Next: Regexp Operators,  Prev: Regexp Usage,  Up: Regexp

3.2 Escape Sequences
====================

Some characters cannot be included literally in string constants
('"foo"') or regexp constants ('/foo/').  Instead, they should be
represented with "escape sequences", which are character sequences
beginning with a backslash ('\').  One use of an escape sequence is to
include a double-quote character in a string constant.  Because a plain
double quote ends the string, you must use '\"' to represent an actual
double-quote character as a part of the string.  For example:

     $ awk 'BEGIN { print "He said \"hi!\" to her." }'
     -| He said "hi!" to her.

   The backslash character itself is another character that cannot be
included normally; you must write '\\' to put one backslash in the
string or regexp.  Thus, the string whose contents are the two
characters '"' and '\' must be written '"\"\\"'.

   Other escape sequences represent unprintable characters such as TAB
or newline.  There is nothing to stop you from entering most unprintable
characters directly in a string constant or regexp constant, but they
may look ugly.

   The following list presents all the escape sequences used in 'awk'
and what they represent.  Unless noted otherwise, all these escape
sequences apply to both string constants and regexp constants:

'\\'
     A literal backslash, '\'.

'\a'
     The "alert" character, 'Ctrl-g', ASCII code 7 (BEL). (This often
     makes some sort of audible noise.)

'\b'
     Backspace, 'Ctrl-h', ASCII code 8 (BS).

'\f'
     Formfeed, 'Ctrl-l', ASCII code 12 (FF).

'\n'
     Newline, 'Ctrl-j', ASCII code 10 (LF).

'\r'
     Carriage return, 'Ctrl-m', ASCII code 13 (CR).

'\t'
     Horizontal TAB, 'Ctrl-i', ASCII code 9 (HT).

'\v'
     Vertical TAB, 'Ctrl-k', ASCII code 11 (VT).

'\NNN'
     The octal value NNN, where NNN stands for 1 to 3 digits between '0'
     and '7'.  For example, the code for the ASCII ESC (escape)
     character is '\033'.

'\xHH...'
     The hexadecimal value HH, where HH stands for a sequence of
     hexadecimal digits ('0'-'9', and either 'A'-'F' or 'a'-'f').  A
     maximum of two digits is allowed after the '\x'.  Any further
     hexadecimal digits are treated as simple letters or numbers.
     (c.e.)  (The '\x' escape sequence is not allowed in POSIX awk.)

          CAUTION: In ISO C, the escape sequence continues until the
          first nonhexadecimal digit is seen.  For many years, 'gawk'
          would continue incorporating hexadecimal digits into the value
          until a non-hexadecimal digit or the end of the string was
          encountered.  However, using more than two hexadecimal digits
          produced undefined results.  As of version 4.2, only two
          digits are processed.

'\/'
     A literal slash (necessary for regexp constants only).  This
     sequence is used when you want to write a regexp constant that
     contains a slash (such as '/.*:\/home\/[[:alnum:]]+:.*/'; the
     '[[:alnum:]]' notation is discussed in *note Bracket
     Expressions::).  Because the regexp is delimited by slashes, you
     need to escape any slash that is part of the pattern, in order to
     tell 'awk' to keep processing the rest of the regexp.

'\"'
     A literal double quote (necessary for string constants only).  This
     sequence is used when you want to write a string constant that
     contains a double quote (such as '"He said \"hi!\" to her."').
     Because the string is delimited by double quotes, you need to
     escape any quote that is part of the string, in order to tell 'awk'
     to keep processing the rest of the string.
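
   A couple of these sequences in action, with any POSIX 'awk' (octal
'\101' is decimal 65, the ASCII code for 'A'):

```shell
# '\101' is the octal escape for 'A'; '\t' produces a TAB.
awk 'BEGIN { print "\101\tB" }'
# prints 'A', a TAB, and 'B'
```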

   In 'gawk', a number of additional two-character sequences that begin
with a backslash have special meaning in regexps.  *Note GNU Regexp
Operators::.

   In a regexp, a backslash before any character that is not in the
previous list and not listed in *note GNU Regexp Operators:: means that
the next character should be taken literally, even if it would normally
be a regexp operator.  For example, '/a\+b/' matches the three
characters 'a+b'.

   For complete portability, do not use a backslash before any character
not shown in the previous list or that is not an operator.
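
   For example, in any POSIX 'awk':

```shell
# The backslash makes '+' literal, so the regexp matches the three
# characters 'a+b' rather than "one or more a's followed by b".
awk 'BEGIN { print ("a+b" ~ /a\+b/), ("aab" ~ /a\+b/) }'
# prints: 1 0
```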

                  Backslash Before Regular Characters

   If you place a backslash in a string constant before something that
is not one of the characters previously listed, POSIX 'awk' purposely
leaves what happens as undefined.  There are two choices:

Strip the backslash out
     This is what BWK 'awk' and 'gawk' both do.  For example, '"a\qc"'
     is the same as '"aqc"'.  (Because this is such an easy bug both to
     introduce and to miss, 'gawk' warns you about it.)  Consider 'FS =
     "[ \t]+\|[ \t]+"' to use vertical bars surrounded by whitespace as
     the field separator.  There should be two backslashes in the
     string: 'FS = "[ \t]+\\|[ \t]+"'.

Leave the backslash alone
     Some other 'awk' implementations do this.  In such implementations,
     typing '"a\qc"' is the same as typing '"a\\qc"'.

   To summarize:

   * The escape sequences in the preceding list are always processed
     first, for both string constants and regexp constants.  This
     happens very early, as soon as 'awk' reads your program.

   * 'gawk' processes both regexp constants and dynamic regexps (*note
     Computed Regexps::), for the special operators listed in *note GNU
     Regexp Operators::.

   * A backslash before any other character means to treat that
     character literally.

                  Escape Sequences for Metacharacters

   Suppose you use an octal or hexadecimal escape to represent a regexp
metacharacter.  (See *note Regexp Operators::.)  Does 'awk' treat the
character as a literal character or as a regexp operator?

   Historically, such characters were taken literally.  (d.c.)  However,
the POSIX standard indicates that they should be treated as real
metacharacters, which is what 'gawk' does.  In compatibility mode (*note
Options::), 'gawk' treats the characters represented by octal and
hexadecimal escape sequences literally when used in regexp constants.
Thus, '/a\52b/' is equivalent to '/a\*b/'.


File: gawk.info,  Node: Regexp Operators,  Next: Bracket Expressions,  Prev: Escape Sequences,  Up: Regexp

3.3 Regular Expression Operators
================================

You can combine regular expressions with special characters, called
"regular expression operators" or "metacharacters", to increase the
power and versatility of regular expressions.

   The escape sequences described in *note Escape Sequences:: are valid
inside a regexp.  They are introduced by a '\' and are recognized and
converted into corresponding real characters as the very first step in
processing regexps.

   Here is a list of metacharacters.  All characters that are not escape
sequences and that are not listed here stand for themselves:

'\'
     This suppresses the special meaning of a character when matching.
     For example, '\$' matches the character '$'.

'^'
     This matches the beginning of a string.  '^@chapter' matches
     '@chapter' at the beginning of a string, for example, and can be
     used to identify chapter beginnings in Texinfo source files.  The
     '^' is known as an "anchor", because it anchors the pattern to
     match only at the beginning of the string.

     It is important to realize that '^' does not match the beginning of
     a line (the point right after a '\n' newline character) embedded in
     a string.  The condition is not true in the following example:

          if ("line1\nLINE 2" ~ /^L/) ...

'$'
     This is similar to '^', but it matches only at the end of a string.
     For example, 'p$' matches a record that ends with a 'p'.  The '$'
     is an anchor and does not match the end of a line (the point right
     before a '\n' newline character) embedded in a string.  The
     condition in the following example is not true:

          if ("line1\nLINE 2" ~ /1$/) ...

'.' (period)
     This matches any single character, _including_ the newline
     character.  For example, '.P' matches any single character followed
     by a 'P' in a string.  Using concatenation, we can make a regular
     expression such as 'U.A', which matches any three-character
     sequence that begins with 'U' and ends with 'A'.

     In strict POSIX mode (*note Options::), '.' does not match the NUL
     character, which is a character with all bits equal to zero.
     Otherwise, NUL is just another character.  Other versions of 'awk'
     may not be able to match the NUL character.

'['...']'
     This is called a "bracket expression".(1)  It matches any _one_ of
     the characters that are enclosed in the square brackets.  For
     example, '[MVX]' matches any one of the characters 'M', 'V', or 'X'
     in a string.  A full discussion of what can be inside the square
     brackets of a bracket expression is given in *note Bracket
     Expressions::.

'[^'...']'
     This is a "complemented bracket expression".  The first character
     after the '[' _must_ be a '^'.  It matches any characters _except_
     those in the square brackets.  For example, '[^awk]' matches any
     character that is not an 'a', 'w', or 'k'.

'|'
     This is the "alternation operator" and it is used to specify
     alternatives.  The '|' has the lowest precedence of all the regular
     expression operators.  For example, '^P|[aeiouy]' matches any
     string that matches either '^P' or '[aeiouy]'.  This means it
     matches any string that starts with 'P' or contains (anywhere
     within it) a lowercase English vowel.

     The alternation applies to the largest possible regexps on either
     side.

'('...')'
     Parentheses are used for grouping in regular expressions, as in
     arithmetic.  They can be used to concatenate regular expressions
     containing the alternation operator, '|'.  For example,
     '@(samp|code)\{[^}]+\}' matches both '@code{foo}' and '@samp{bar}'.
     (These are Texinfo formatting control sequences.  The '+' is
     explained further on in this list.)

'*'
     This symbol means that the preceding regular expression should be
     repeated as many times as necessary to find a match.  For example,
     'ph*' applies the '*' symbol to the preceding 'h' and looks for
     matches of one 'p' followed by any number of 'h's.  This also
     matches just 'p' if no 'h's are present.

     There are two subtle points to understand about how '*' works.
     First, the '*' applies only to the single preceding regular
     expression component (e.g., in 'ph*', it applies just to the 'h').
     To cause '*' to apply to a larger subexpression, use parentheses:
     '(ph)*' matches 'ph', 'phph', 'phphph', and so on.

     Second, '*' finds as many repetitions as possible.  If the text to
     be matched is 'phhhhhhhhhhhhhhooey', 'ph*' matches all of the 'h's.

'+'
     This symbol is similar to '*', except that the preceding expression
     must be matched at least once.  This means that 'wh+y' would match
     'why' and 'whhy', but not 'wy', whereas 'wh*y' would match all
     three.

'?'
     This symbol is similar to '*', except that the preceding expression
     can be matched either once or not at all.  For example, 'fe?d'
     matches 'fed' and 'fd', but nothing else.

'{'N'}'
'{'N',}'
'{'N','M'}'
     One or two numbers inside braces denote an "interval expression".
     If there is one number in the braces, the preceding regexp is
     repeated N times.  If there are two numbers separated by a comma,
     the preceding regexp is repeated N to M times.  If there is one
     number followed by a comma, then the preceding regexp is repeated
     at least N times:

     'wh{3}y'
          Matches 'whhhy', but not 'why' or 'whhhhy'.

     'wh{3,5}y'
          Matches 'whhhy', 'whhhhy', or 'whhhhhy' only.

     'wh{2,}y'
          Matches 'whhy', 'whhhy', and so on.

     Interval expressions were not traditionally available in 'awk'.
     They were added as part of the POSIX standard to make 'awk' and
     'egrep' consistent with each other.

     Initially, because old programs may use '{' and '}' in regexp
     constants, 'gawk' did _not_ match interval expressions in regexps.

     However, beginning with version 4.0, 'gawk' does match interval
     expressions by default.  This is because compatibility with POSIX
     has become more important to most 'gawk' users than compatibility
     with old programs.

     For programs that use '{' and '}' in regexp constants, it is good
     practice to always escape them with a backslash.  Then the regexp
     constants are valid and work the way you want them to, using any
     version of 'awk'.(2)

     Finally, when '{' and '}' appear in regexp constants in a way that
     cannot be interpreted as an interval expression (such as '/q{a}/'),
     then they stand for themselves.
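
   A few of these operators in action.  This sketch sticks to
operators that every POSIX 'awk' supports:

```shell
awk 'BEGIN {
    print ("line1\nLINE 2" ~ /^L/)   # 0: ^ anchors to start of string
    print ("why" ~ /wh+y/)           # 1: + means one or more h
    print ("wy"  ~ /wh+y/)           # 0: + needs at least one h
    print ("fd"  ~ /fe?d/)           # 1: ? makes the e optional
    print ("Peach" ~ /^P|[aeiouy]/)  # 1: alternation
}'
```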

   In regular expressions, the '*', '+', and '?' operators, as well as
the braces '{' and '}', have the highest precedence, followed by
concatenation, and finally by '|'.  As in arithmetic, parentheses can
change how operators are grouped.

   In POSIX 'awk' and 'gawk', the '*', '+', and '?' operators stand for
themselves when there is nothing in the regexp that precedes them.  For
example, '/+/' matches a literal plus sign.  However, many other
versions of 'awk' treat such a usage as a syntax error.

   If 'gawk' is in compatibility mode (*note Options::), interval
expressions are not available in regular expressions.

   ---------- Footnotes ----------

   (1) In other literature, you may see a bracket expression referred to
as either a "character set", a "character class", or a "character list".

   (2) Use two backslashes if you're using a string constant with a
regexp operator or function.


File: gawk.info,  Node: Bracket Expressions,  Next: Leftmost Longest,  Prev: Regexp Operators,  Up: Regexp

3.4 Using Bracket Expressions
=============================

As mentioned earlier, a bracket expression matches any character among
those listed between the opening and closing square brackets.

   Within a bracket expression, a "range expression" consists of two
characters separated by a hyphen.  It matches any single character that
sorts between the two characters, based upon the system's native
character set.  For example, '[0-9]' is equivalent to '[0123456789]'.
(See *note Ranges and Locales:: for an explanation of how the POSIX
standard and 'gawk' have changed over time.  This is mainly of
historical interest.)
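
   For example:

```shell
# '[0-9]' matches any one decimal digit; '[a-cx-z]' combines two ranges.
awk 'BEGIN { print ("route66" ~ /[0-9]/), ("q" ~ /[a-cx-z]/) }'
# prints: 1 0
```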

   With the increasing popularity of the Unicode character standard
(http://www.unicode.org), there is an additional wrinkle to consider.
Octal and hexadecimal escape sequences inside bracket expressions are
taken to represent only single-byte characters (characters whose values
fit within the range 0-255).  To match a range of characters where the
endpoints of the range are larger than 255, enter the multibyte
encodings of the characters directly.

   To include one of the characters '\', ']', '-', or '^' in a bracket
expression, put a '\' in front of it.  For example:

     [d\]]

matches either 'd' or ']'.  Additionally, if you place ']' right after
the opening '[', the closing bracket is treated as one of the characters
to be matched.
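
   Both ways of matching a literal ']' can be checked:

```shell
# '[d\]]' escapes the ']'; '[]d]' lists ']' immediately after the '['.
awk 'BEGIN { print ("]" ~ /[d\]]/), ("]" ~ /[]d]/) }'
# prints: 1 1
```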

   The treatment of '\' in bracket expressions is compatible with other
'awk' implementations and is also mandated by POSIX. The regular
expressions in 'awk' are a superset of the POSIX specification for
Extended Regular Expressions (EREs).  POSIX EREs are based on the
regular expressions accepted by the traditional 'egrep' utility.

   "Character classes" are a feature introduced in the POSIX standard.
A character class is a special notation for describing lists of
characters that have a specific attribute, but the actual characters can
vary from country to country and/or from character set to character set.
For example, the notion of what is an alphabetic character differs
between the United States and France.

   A character class is only valid in a regexp _inside_ the brackets of
a bracket expression.  Character classes consist of '[:', a keyword
denoting the class, and ':]'.  *note Table 3.1: table-char-classes.
lists the character classes defined by the POSIX standard.

Class       Meaning
--------------------------------------------------------------------------
'[:alnum:]' Alphanumeric characters
'[:alpha:]' Alphabetic characters
'[:blank:]' Space and TAB characters
'[:cntrl:]' Control characters
'[:digit:]' Numeric characters
'[:graph:]' Characters that are both printable and visible (a space is
            printable but not visible, whereas an 'a' is both)
'[:lower:]' Lowercase alphabetic characters
'[:print:]' Printable characters (characters that are not control
            characters)
'[:punct:]' Punctuation characters (characters that are not letters,
            digits, control characters, or space characters)
'[:space:]' Space characters (such as space, TAB, and formfeed, to name
            a few)
'[:upper:]' Uppercase alphabetic characters
'[:xdigit:]'Characters that are hexadecimal digits

Table 3.1: POSIX character classes

   For example, before the POSIX standard, you had to write
'/[A-Za-z0-9]/' to match alphanumeric characters.  If your character set
had other alphabetic characters in it, this would not match them.  With
the POSIX character classes, you can write '/[[:alnum:]]/' to match the
alphabetic and numeric characters in your character set.
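
   As a brief sketch (assuming an 'awk' with POSIX character-class
support, which all current implementations provide), a character class
replaces the older explicit ranges:

```shell
# Collapse each run of alphanumeric characters into a single 'X'.
echo 'abc123 def' | awk '{ gsub(/[[:alnum:]]+/, "X"); print }'
# prints: X X
```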

   Some utilities that match regular expressions provide a nonstandard
'[:ascii:]' character class; 'awk' does not.  However, you can simulate
such a construct using '[\x00-\x7F]'.  This matches all values
numerically between zero and 127, which is the defined range of the
ASCII character set.  Use a complemented character list ('[^\x00-\x7F]')
to match any single-byte characters that are not in the ASCII range.

   Two additional special sequences can appear in bracket expressions.
These apply to non-ASCII character sets, which can have single symbols
(called "collating elements") that are represented with more than one
character.  They can also have several characters that are equivalent
for "collating", or sorting, purposes.  (For example, in French, a plain
"e" and a grave-accented "e`" are equivalent.)  These sequences are:

Collating symbols
     Multicharacter collating elements enclosed between '[.' and '.]'.
     For example, if 'ch' is a collating element, then '[[.ch.]]' is a
     regexp that matches this collating element, whereas '[ch]' is a
     regexp that matches either 'c' or 'h'.

Equivalence classes
     Locale-specific names for a list of characters that are equal.  The
     name is enclosed between '[=' and '=]'.  For example, the name 'e'
     might be used to represent all of "e," "e^," "e`," and "e'."  In
     this case, '[[=e=]]' is a regexp that matches any of 'e', 'e^',
     'e'', or 'e`'.

   These features are very valuable in non-English-speaking locales.

     CAUTION: The library functions that 'gawk' uses for regular
     expression matching currently recognize only POSIX character
     classes; they do not recognize collating symbols or equivalence
     classes.

   Inside a bracket expression, an opening bracket ('[') that does not
start a character class, collating element or equivalence class is taken
literally.  This is also true of '.' and '*'.


File: gawk.info,  Node: Leftmost Longest,  Next: Computed Regexps,  Prev: Bracket Expressions,  Up: Regexp

3.5 How Much Text Matches?
==========================

Consider the following:

     echo aaaabcd | awk '{ sub(/a+/, "<A>"); print }'

   This example uses the 'sub()' function to make a change to the input
record.  ('sub()' replaces the first instance of any text matched by the
first argument with the string provided as the second argument; *note
String Functions::.)  Here, the regexp '/a+/' indicates "one or more 'a'
characters," and the replacement text is '<A>'.

   The input contains four 'a' characters.  'awk' (and POSIX) regular
expressions always match the leftmost, _longest_ sequence of input
characters that can match.  Thus, all four 'a' characters are replaced
with '<A>' in this example:

     $ echo aaaabcd | awk '{ sub(/a+/, "<A>"); print }'
     -| <A>bcd

   For simple match/no-match tests, this is not so important.  But when
doing text matching and substitutions with the 'match()', 'sub()',
'gsub()', and 'gensub()' functions, it is very important.  *Note String
Functions::, for more information on these functions.  Understanding
this principle is also important for regexp-based record and field
splitting (*note Records::, and also *note Field Separators::).
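
   You can observe the extent of a match directly with 'match()', which
sets the variables 'RSTART' and 'RLENGTH' (a portable sketch; these are
described fully with the other string functions):

```shell
# match() reports where the leftmost-longest match begins and how long
# it is: all four 'a' characters, starting at position 1.
echo aaaabcd | awk '{ match($0, /a+/); print RSTART, RLENGTH }'
# prints: 1 4
```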


File: gawk.info,  Node: Computed Regexps,  Next: GNU Regexp Operators,  Prev: Leftmost Longest,  Up: Regexp

3.6 Using Dynamic Regexps
=========================

The righthand side of a '~' or '!~' operator need not be a regexp
constant (i.e., a string of characters between slashes).  It may be any
expression.  The expression is evaluated and converted to a string if
necessary; the contents of the string are then used as the regexp.  A
regexp computed in this way is called a "dynamic regexp" or a "computed
regexp":

     BEGIN { digits_regexp = "[[:digit:]]+" }
     $0 ~ digits_regexp    { print }

This sets 'digits_regexp' to a regexp that describes one or more digits,
and tests whether the input record matches this regexp.

     NOTE: When using the '~' and '!~' operators, be aware that there is
     a difference between a regexp constant enclosed in slashes and a
     string constant enclosed in double quotes.  If you are going to use
     a string constant, you have to understand that the string is, in
     essence, scanned _twice_: the first time when 'awk' reads your
     program, and the second time when it goes to match the string on
     the lefthand side of the operator with the pattern on the right.
     This is true of any string-valued expression (such as
     'digits_regexp', shown in the previous example), not just string
     constants.

   What difference does it make if the string is scanned twice?  The
answer has to do with escape sequences, and particularly with
backslashes.  To get a backslash into a regular expression inside a
string, you have to type two backslashes.

   For example, '/\*/' is a regexp constant for a literal '*'.  Only one
backslash is needed.  To do the same thing with a string, you have to
type '"\\*"'.  The first backslash escapes the second one so that the
string actually contains the two characters '\' and '*'.
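
   The two forms can be compared side by side (a minimal sketch; both
lines assume a POSIX-compliant 'awk'):

```shell
# Regexp constant: one backslash protects the '*'.
echo '2*3' | awk '{ sub(/\*/, "x"); print }'       # prints: 2x3
# String constant: two backslashes survive string parsing as '\*'.
echo '2*3' | awk '{ sub("\\*", "x"); print }'      # prints: 2x3
```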

   Given that you can use both regexp and string constants to describe
regular expressions, which should you use?  The answer is "regexp
constants," for several reasons:

   * String constants are more complicated to write and more difficult
     to read.  Using regexp constants makes your programs less
     error-prone.  Not understanding the difference between the two
     kinds of constants is a common source of errors.

   * It is more efficient to use regexp constants.  'awk' can note that
     you have supplied a regexp and store it internally in a form that
     makes pattern matching more efficient.  When using a string
     constant, 'awk' must first convert the string into this internal
     form and then perform the pattern matching.

   * Using regexp constants is better form; it shows clearly that you
     intend a regexp match.

         Using '\n' in Bracket Expressions of Dynamic Regexps

   Some older versions of 'awk' do not allow the newline character to be
used inside a bracket expression for a dynamic regexp:

     $ awk '$0 ~ "[ \t\n]"'
     error-> awk: newline in character class [
     error-> ]...
     error->  source line number 1
     error->  context is
     error->        $0 ~ "[ >>>  \t\n]" <<<

   But a newline in a regexp constant works with no problem:

     $ awk '$0 ~ /[ \t\n]/'
     here is a sample line
     -| here is a sample line
     Ctrl-d

   'gawk' does not have this problem, and it isn't likely to occur often
in practice, but it's worth noting for future reference.


File: gawk.info,  Node: GNU Regexp Operators,  Next: Case-sensitivity,  Prev: Computed Regexps,  Up: Regexp

3.7 'gawk'-Specific Regexp Operators
====================================

GNU software that deals with regular expressions provides a number of
additional regexp operators.  These operators are described in this
minor node and are specific to 'gawk'; they are not available in other
'awk' implementations.  Most of the additional operators deal with word
matching.  For our purposes, a "word" is a sequence of one or more
letters, digits, or underscores ('_'):

'\s'
     Matches any whitespace character.  Think of it as shorthand for
     '[[:space:]]'.

'\S'
     Matches any character that is not whitespace.  Think of it as
     shorthand for '[^[:space:]]'.

'\w'
     Matches any word-constituent character--that is, it matches any
     letter, digit, or underscore.  Think of it as shorthand for
     '[[:alnum:]_]'.

'\W'
     Matches any character that is not word-constituent.  Think of it as
     shorthand for '[^[:alnum:]_]'.

'\<'
     Matches the empty string at the beginning of a word.  For example,
     '/\<away/' matches 'away' but not 'stowaway'.

'\>'
     Matches the empty string at the end of a word.  For example,
     '/stow\>/' matches 'stow' but not 'stowaway'.

'\y'
     Matches the empty string at either the beginning or the end of a
     word (i.e., the word boundar*y*).  For example, '\yballs?\y'
     matches either 'ball' or 'balls', as a separate word.

'\B'
     Matches the empty string that occurs between two word-constituent
     characters.  For example, '/\Brat\B/' matches 'crate', but it does
     not match 'dirty rat'.  '\B' is essentially the opposite of '\y'.

   There are two other operators that work on buffers.  In Emacs, a
"buffer" is, naturally, an Emacs buffer.  Other GNU programs, including
'gawk', consider the entire string to match as the buffer.  The
operators are:

'\`'
     Matches the empty string at the beginning of a buffer (string)

'\''
     Matches the empty string at the end of a buffer (string)

   Because '^' and '$' always work in terms of the beginning and end of
strings, these operators don't add any new capabilities for 'awk'.  They
are provided for compatibility with other GNU software.

   In other GNU software, the word-boundary operator is '\b'.  However,
that conflicts with the 'awk' language's definition of '\b' as
backspace, so 'gawk' uses a different letter.  An alternative method
would have been to require two backslashes in the GNU operators, but
this was deemed too confusing.  The current method of using '\y' for the
GNU '\b' appears to be the lesser of two evils.

   The various command-line options (*note Options::) control how 'gawk'
interprets characters in regexps:

No options
     In the default case, 'gawk' provides all the facilities of POSIX
     regexps and the GNU regexp operators described in *note Regexp
     Operators::.

'--posix'
     Match only POSIX regexps; the GNU operators are not special (e.g.,
     '\w' matches a literal 'w').  Interval expressions are allowed.

'--traditional'
     Match traditional Unix 'awk' regexps.  The GNU operators are not
     special, and interval expressions are not available.  Because BWK
     'awk' supports them, the POSIX character classes ('[[:alnum:]]',
     etc.)  are available.  Characters described by octal and
     hexadecimal escape sequences are treated literally, even if they
     represent regexp metacharacters.

'--re-interval'
     Allow interval expressions in regexps, if '--traditional' has been
     provided.  Otherwise, interval expressions are available by
     default.


File: gawk.info,  Node: Case-sensitivity,  Next: Strong Regexp Constants,  Prev: GNU Regexp Operators,  Up: Regexp

3.8 Case Sensitivity in Matching
================================

Case is normally significant in regular expressions, both when matching
ordinary characters (i.e., not metacharacters) and inside bracket
expressions.  Thus, a 'w' in a regular expression matches only a
lowercase 'w' and not an uppercase 'W'.

   The simplest way to do a case-independent match is to use a bracket
expression--for example, '[Ww]'.  However, this can be cumbersome if you
need to use it often, and it can make the regular expressions harder to
read.  There are two alternatives that you might prefer.

   One way to perform a case-insensitive match at a particular point in
the program is to convert the data to a single case, using the
'tolower()' or 'toupper()' built-in string functions (which we haven't
discussed yet; *note String Functions::).  For example:

     tolower($1) ~ /foo/  { ... }

converts the first field to lowercase before matching against it.  This
works in any POSIX-compliant 'awk'.
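
   A complete sketch of this technique, runnable in any POSIX-compliant
'awk' (the input lines are just illustrations):

```shell
# Fold the first field to lowercase before matching, so 'FOO', 'Foo',
# and 'foo' all match; only the matching record's second field prints.
printf 'FOO bar\nbaz qux\n' | awk 'tolower($1) ~ /foo/ { print $2 }'
# prints: bar
```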

   Another method, specific to 'gawk', is to set the variable
'IGNORECASE' to a nonzero value (*note Built-in Variables::).  When
'IGNORECASE' is not zero, _all_ regexp and string operations ignore
case.

   Changing the value of 'IGNORECASE' dynamically controls the case
sensitivity of the program as it runs.  Case is significant by default
because 'IGNORECASE' (like most variables) is initialized to zero:

     x = "aB"
     if (x ~ /ab/) ...   # this test will fail

     IGNORECASE = 1
     if (x ~ /ab/) ...   # now it will succeed

   In general, you cannot use 'IGNORECASE' to make certain rules case
insensitive and other rules case sensitive, as there is no
straightforward way to set 'IGNORECASE' just for the pattern of a
particular rule.(1)  To do this, use either bracket expressions or
'tolower()'.  However, one thing you can do with 'IGNORECASE' only is
dynamically turn case sensitivity on or off for all the rules at once.

   'IGNORECASE' can be set on the command line or in a 'BEGIN' rule
(*note Other Arguments::; also *note Using BEGIN/END::).  Setting
'IGNORECASE' from the command line is a way to make a program case
insensitive without having to edit it.

   In multibyte locales, the equivalences between upper- and lowercase
characters are tested based on the wide-character values of the locale's
character set.  Otherwise, the characters are tested based on the
ISO-8859-1 (ISO Latin-1) character set.  This character set is a
superset of the traditional 128 ASCII characters, which also provides a
number of characters suitable for use with European languages.(2)

   The value of 'IGNORECASE' has no effect if 'gawk' is in compatibility
mode (*note Options::).  Case is always significant in compatibility
mode.

   ---------- Footnotes ----------

   (1) Experienced C and C++ programmers will note that it is possible
to get this effect using something like 'IGNORECASE = 1 && /foObAr/ {
... }' and 'IGNORECASE = 0 || /foobar/ { ... }'.  However, this is
somewhat obscure
and we don't recommend it.

   (2) If you don't understand this, don't worry about it; it just means
that 'gawk' does the right thing.


File: gawk.info,  Node: Strong Regexp Constants,  Next: Regexp Summary,  Prev: Case-sensitivity,  Up: Regexp

3.9 Strongly Typed Regexp Constants
===================================

This minor node describes a 'gawk'-specific feature.

   Regexp constants ('/.../') hold a strange position in the 'awk'
language.  In most contexts, they act like an expression: '$0 ~ /.../'.
In other contexts, they denote only a regexp to be matched.  In no case
are they really a "first class citizen" of the language.  That is, you
cannot define a scalar variable whose type is "regexp" in the same sense
that you can define a variable to be a number or a string:

     num = 42        Numeric variable
     str = "hi"      String variable
     re = /foo/      Wrong! re is the result of $0 ~ /foo/

   For a number of more advanced use cases (described later on in this
Info file), it would be nice to have regexp constants that are "strongly
typed"; in other words, that denote a regexp useful for matching, and
not an expression.

   'gawk' provides this feature.  A strongly typed regexp constant looks
almost like a regular regexp constant, except that it is preceded by an
'@' sign:

     re = @/foo/     Regexp variable

   Strongly typed regexp constants _cannot_ be used everywhere that a
regular regexp constant can, because this would make the language even
more confusing.  Instead, you may use them only in certain contexts:

   * On the righthand side of the '~' and '!~' operators: 'some_var ~
     @/foo/' (*note Regexp Usage::).

   * In the 'case' part of a 'switch' statement (*note Switch
     Statement::).

   * As an argument to one of the built-in functions that accept regexp
     constants: 'gensub()', 'gsub()', 'match()', 'patsplit()',
     'split()', and 'sub()' (*note String Functions::).

   * As a parameter in a call to a user-defined function (*note
     User-defined::).

   * On the righthand side of an assignment to a variable: 'some_var =
     @/foo/'.  In this case, the type of 'some_var' is regexp.
     Additionally, 'some_var' can be used with '~' and '!~', passed to
     one of the built-in functions listed above, or passed as a
     parameter to a user-defined function.

   You may use the 'typeof()' built-in function (*note Type Functions::)
to determine if a variable or function parameter is a regexp variable.

   The true power of this feature comes from the ability to create
variables that have regexp type.  Such variables can be passed on to
user-defined functions, without the confusing aspects of computed
regular expressions created from strings or string constants.  They may
also be passed through indirect function calls (*note Indirect Calls::)
onto the built-in functions that accept regexp constants.

   When used in numeric conversions, strongly typed regexp variables
convert to zero.  When used in string conversions, they convert to the
string value of the original regexp text.


File: gawk.info,  Node: Regexp Summary,  Prev: Strong Regexp Constants,  Up: Regexp

3.10 Summary
============

   * Regular expressions describe sets of strings to be matched.  In
     'awk', regular expression constants are written enclosed between
     slashes: '/'...'/'.

   * Regexp constants may be used standalone in patterns and in
     conditional expressions, or as part of matching expressions using
     the '~' and '!~' operators.

   * Escape sequences let you represent nonprintable characters and also
     let you represent regexp metacharacters as literal characters to be
     matched.

   * Regexp operators provide grouping, alternation, and repetition.

   * Bracket expressions give you a shorthand for specifying sets of
     characters that can match at a particular point in a regexp.
     Within bracket expressions, POSIX character classes let you specify
     certain groups of characters in a locale-independent fashion.

   * Regular expressions match the leftmost longest text in the string
     being matched.  This matters for cases where you need to know the
     extent of the match, such as for text substitution and when the
     record separator is a regexp.

   * Matching expressions may use dynamic regexps (i.e., string values
     treated as regular expressions).

   * 'gawk''s 'IGNORECASE' variable lets you control the case
     sensitivity of regexp matching.  In other 'awk' versions, use
     'tolower()' or 'toupper()'.

   * Strongly typed regexp constants ('@/.../') enable certain advanced
     use cases to be described later on in the Info file.


File: gawk.info,  Node: Reading Files,  Next: Printing,  Prev: Regexp,  Up: Top

4 Reading Input Files
*********************

In the typical 'awk' program, 'awk' reads all input either from the
standard input (by default, this is the keyboard, but often it is a pipe
from another command) or from files whose names you specify on the 'awk'
command line.  If you specify input files, 'awk' reads them in order,
processing all the data from one before going on to the next.  The name
of the current input file can be found in the predefined variable
'FILENAME' (*note Built-in Variables::).

   The input is read in units called "records", and is processed by the
rules of your program one record at a time.  By default, each record is
one line.  Each record is automatically split into chunks called
"fields".  This makes it more convenient for programs to work on the
parts of a record.

   On rare occasions, you may need to use the 'getline' command.  The
'getline' command is valuable both because it can do explicit input from
any number of files, and because the files used with it do not have to
be named on the 'awk' command line (*note Getline::).

* Menu:

* Records::                     Controlling how data is split into records.
* Fields::                      An introduction to fields.
* Nonconstant Fields::          Nonconstant Field Numbers.
* Changing Fields::             Changing the Contents of a Field.
* Field Separators::            The field separator and how to change it.
* Constant Size::               Reading constant width data.
* Splitting By Content::        Defining Fields By Content
* Multiple Line::               Reading multiline records.
* Getline::                     Reading files under explicit program control
                                using the 'getline' function.
* Read Timeout::                Reading input with a timeout.
* Retrying Input::              Retrying input after certain errors.
* Command-line directories::    What happens if you put a directory on the
                                command line.
* Input Summary::               Input summary.
* Input Exercises::             Exercises.


File: gawk.info,  Node: Records,  Next: Fields,  Up: Reading Files

4.1 How Input Is Split into Records
===================================

'awk' divides the input for your program into records and fields.  It
keeps track of the number of records that have been read so far from the
current input file.  This value is stored in a predefined variable
called 'FNR', which is reset to zero every time a new file is started.
Another predefined variable, 'NR', records the total number of input
records read so far from all data files.  It starts at zero, but is
never automatically reset to zero.

* Menu:

* awk split records::           How standard 'awk' splits records.
* gawk split records::          How 'gawk' splits records.


File: gawk.info,  Node: awk split records,  Next: gawk split records,  Up: Records

4.1.1 Record Splitting with Standard 'awk'
------------------------------------------

Records are separated by a character called the "record separator".  By
default, the record separator is the newline character.  This is why
records are, by default, single lines.  To use a different character for
the record separator, simply assign that character to the predefined
variable 'RS'.

   Like any other variable, the value of 'RS' can be changed in the
'awk' program with the assignment operator, '=' (*note Assignment
Ops::).  The new record-separator character should be enclosed in
quotation marks, which indicate a string constant.  Often, the right
time to do this is at the beginning of execution, before any input is
processed, so that the very first record is read with the proper
separator.  To do this, use the special 'BEGIN' pattern (*note
BEGIN/END::).  For example:

     awk 'BEGIN { RS = "u" }
          { print $0 }' mail-list

changes the value of 'RS' to 'u', before reading any input.  The new
value is a string whose first character is the letter "u"; as a result,
records are separated by the letter "u".  Then the input file is read,
and the second rule in the 'awk' program (the action with no pattern)
prints each record.  Because each 'print' statement adds a newline at
the end of its output, this 'awk' program copies the input with each 'u'
changed to a newline.  Here are the results of running the program on
'mail-list':

     $ awk 'BEGIN { RS = "u" }
     >      { print $0 }' mail-list
     -| Amelia       555-5553     amelia.zodiac
     -| sq
     -| e@gmail.com    F
     -| Anthony      555-3412     anthony.assert
     -| ro@hotmail.com   A
     -| Becky        555-7685     becky.algebrar
     -| m@gmail.com      A
     -| Bill         555-1675     bill.drowning@hotmail.com       A
     -| Broderick    555-0542     broderick.aliq
     -| otiens@yahoo.com R
     -| Camilla      555-2912     camilla.inf
     -| sar
     -| m@skynet.be     R
     -| Fabi
     -| s       555-1234     fabi
     -| s.
     -| ndevicesim
     -| s@
     -| cb.ed
     -|     F
     -| J
     -| lie        555-6699     j
     -| lie.perscr
     -| tabor@skeeve.com   F
     -| Martin       555-6480     martin.codicib
     -| s@hotmail.com    A
     -| Sam
     -| el       555-3430     sam
     -| el.lanceolis@sh
     -| .ed
     -|         A
     -| Jean-Pa
     -| l    555-2127     jeanpa
     -| l.campanor
     -| m@ny
     -| .ed
     -|      R
     -|

Note that the entry for the name 'Bill' is not split.  In the original
data file (*note Sample Data Files::), the line looks like this:

     Bill         555-1675     bill.drowning@hotmail.com       A

It contains no 'u', so there is no reason to split the record, unlike
the others, which each have one or more occurrences of the 'u'.  In
fact, this record is treated as part of the previous record; the newline
separating them in the output is the original newline in the data file,
not the one added by 'awk' when it printed the record!

   Another way to change the record separator is on the command line,
using the variable-assignment feature (*note Other Arguments::):

     awk '{ print $0 }' RS="u" mail-list

This sets 'RS' to 'u' before processing 'mail-list'.

   Using an alphabetic character such as 'u' for the record separator is
highly likely to produce strange results.  Using an unusual character
such as '/' is more likely to produce correct behavior in the majority
of cases, but there are no guarantees.  The moral is: Know Your Data.

   When using regular characters as the record separator, there is one
unusual case that occurs when 'gawk' is being fully POSIX-compliant
(*note Options::).  Then, the following (extreme) pipeline prints a
surprising '1':

     $ echo | gawk --posix 'BEGIN { RS = "a" } ; { print NF }'
     -| 1

   There is one field, consisting of a newline.  The value of the
built-in variable 'NF' is the number of fields in the current record.
(In the normal case, 'gawk' treats the newline as whitespace, printing
'0' as the result.  Most other versions of 'awk' also act this way.)

   Reaching the end of an input file terminates the current input
record, even if the last character in the file is not the character in
'RS'.  (d.c.)

   The empty string '""' (a string without any characters) has a special
meaning as the value of 'RS'.  It means that records are separated by
one or more blank lines and nothing else.  *Note Multiple Line:: for
more details.
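
   For example, with 'RS' set to the empty string, blank lines separate
records (a portable sketch; the input text is just an illustration):

```shell
# Two paragraphs separated by a blank line become two records;
# in this mode, newlines within a record also separate fields.
printf 'one\ntwo\n\nthree\n' | awk 'BEGIN { RS = "" } { print NR, $1 }'
# prints: 1 one
#         2 three
```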

   If you change the value of 'RS' in the middle of an 'awk' run, the
new value is used to delimit subsequent records, but the record
currently being processed, as well as records already processed, are not
affected.

   After the end of the record has been determined, 'gawk' sets the
variable 'RT' to the text in the input that matched 'RS'.


File: gawk.info,  Node: gawk split records,  Prev: awk split records,  Up: Records

4.1.2 Record Splitting with 'gawk'
----------------------------------

When using 'gawk', the value of 'RS' is not limited to a one-character
string.  It can be any regular expression (*note Regexp::).  (c.e.)  In
general, each record ends at the next string that matches the regular
expression; the next record starts at the end of the matching string.
This general rule is actually at work in the usual case, where 'RS'
contains just a newline: a record ends at the beginning of the next
matching string (the next newline in the input), and the following
record starts just after the end of this string (at the first character
of the following line).  The newline, because it matches 'RS', is not
part of either record.

   When 'RS' is a single character, 'RT' contains the same single
character.  However, when 'RS' is a regular expression, 'RT' contains
the actual input text that matched the regular expression.

   If the input file ends without any text matching 'RS', 'gawk' sets
'RT' to the null string.

   The following example illustrates both of these features.  It sets
'RS' equal to a regular expression that matches either a newline or a
series of one or more uppercase letters with optional leading and/or
trailing whitespace:

     $ echo record 1 AAAA record 2 BBBB record 3 |
     > gawk 'BEGIN { RS = "\n|( *[[:upper:]]+ *)" }
     >             { print "Record =", $0,"and RT = [" RT "]" }'
     -| Record = record 1 and RT = [ AAAA ]
     -| Record = record 2 and RT = [ BBBB ]
     -| Record = record 3 and RT = [
     -| ]

The square brackets delineate the contents of 'RT', letting you see the
leading and trailing whitespace.  The final value of 'RT' is a newline.
*Note Simple Sed:: for a more useful example of 'RS' as a regexp and
'RT'.

   If you set 'RS' to a regular expression that allows optional trailing
text, such as 'RS = "abc(XYZ)?"', it is possible, due to implementation
constraints, that 'gawk' may match the leading part of the regular
expression, but not the trailing part, particularly if the input text
that could match the trailing part is fairly long.  'gawk' attempts to
avoid this problem, but currently, there's no guarantee that this will
never happen.

     NOTE: Remember that in 'awk', the '^' and '$' anchor metacharacters
     match the beginning and end of a _string_, and not the beginning
     and end of a _line_.  As a result, something like 'RS =
     "^[[:upper:]]"' can only match at the beginning of a file.  This is
     because 'gawk' views the input file as one long string that happens
     to contain newline characters.  It is thus best to avoid anchor
     metacharacters in the value of 'RS'.

   The use of 'RS' as a regular expression and the 'RT' variable are
'gawk' extensions; they are not available in compatibility mode (*note
Options::).  In compatibility mode, only the first character of the
value of 'RS' determines the end of the record.

                      'RS = "\0"' Is Not Portable

   There are times when you might want to treat an entire data file as a
single record.  The only way to make this happen is to give 'RS' a value
that you know doesn't occur in the input file.  This is hard to do in a
general way, such that a program always works for arbitrary input files.

   You might think that for text files, the NUL character, which
consists of a character with all bits equal to zero, is a good value to
use for 'RS' in this case:

     BEGIN { RS = "\0" }  # whole file becomes one record?

   'gawk' in fact accepts this, and uses the NUL character for the
record separator.  This works for certain special files, such as
'/proc/environ' on GNU/Linux systems, where the NUL character is in fact
the record separator.  However, this usage is _not_ portable to most
other 'awk' implementations.

   Almost all other 'awk' implementations(1) store strings internally as
C-style strings.  C strings use the NUL character as the string
terminator.  In effect, this means that 'RS = "\0"' is the same as 'RS =
""'.  (d.c.)

   It happens that recent versions of 'mawk' can use the NUL character
as a record separator.  However, this is a special case: 'mawk' does not
allow embedded NUL characters in strings.  (This may change in a future
version of 'mawk'.)

   *Note Readfile Function:: for an interesting way to read whole files.
If you are using 'gawk', see *note Extension Sample Readfile:: for
another option.

   ---------- Footnotes ----------

   (1) At least that we know about.


File: gawk.info,  Node: Fields,  Next: Nonconstant Fields,  Prev: Records,  Up: Reading Files

4.2 Examining Fields
====================


When 'awk' reads an input record, the record is automatically "parsed"
or separated by the 'awk' utility into chunks called "fields".  By
default, fields are separated by "whitespace", like words in a line.
Whitespace in 'awk' means any string of one or more spaces, TABs, or
newlines; characters that other languages treat as whitespace (such as
formfeed and vertical tab) are _not_ considered whitespace by 'awk'.

   The purpose of fields is to make it more convenient for you to refer
to these pieces of the record.  You don't have to use them--you can
operate on the whole record if you want--but fields are what make simple
'awk' programs so powerful.

   You use a dollar sign ('$') to refer to a field in an 'awk' program,
followed by the number of the field you want.  Thus, '$1' refers to the
first field, '$2' to the second, and so on.  (Unlike in the Unix shells,
the field numbers are not limited to single digits.  '$127' is the 127th
field in the record.)  For example, suppose the following is a line of
input:

     This seems like a pretty nice example.

Here the first field, or '$1', is 'This', the second field, or '$2', is
'seems', and so on.  Note that the last field, '$7', is 'example.'.
Because there is no space between the 'e' and the '.', the period is
considered part of the seventh field.

   'NF' is a predefined variable whose value is the number of fields in
the current record.  'awk' automatically updates the value of 'NF' each
time it reads a record.  No matter how many fields there are, the last
field in a record can be represented by '$NF'.  So, '$NF' is the same as
'$7', which is 'example.'.  If you try to reference a field beyond the
last one (such as '$8' when the record has only seven fields), you get
the empty string.  (If used in a numeric operation, you get zero.)
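
   These points can be checked directly with the example line (any
POSIX-conforming 'awk' should behave this way):

```shell
# Expected output:
#   7
#   example.
#   empty
echo 'This seems like a pretty nice example.' |
awk '{ print NF
       print $NF
       print ($8 == "" ? "empty" : "nonempty") }'
```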

   The use of '$0', which looks like a reference to the "zeroth" field,
is a special case: it represents the whole input record.  Use it when
you are not interested in specific fields.  Here are some more examples:

     $ awk '$1 ~ /li/ { print $0 }' mail-list
     -| Amelia       555-5553     amelia.zodiacusque@gmail.com    F
     -| Julie        555-6699     julie.perscrutabor@skeeve.com   F

This example prints each record in the file 'mail-list' whose first
field contains the string 'li'.

   By contrast, the following example looks for 'li' in _the entire
record_ and prints the first and last fields for each matching input
record:

     $ awk '/li/ { print $1, $NF }' mail-list
     -| Amelia F
     -| Broderick R
     -| Julie F
     -| Samuel A


File: gawk.info,  Node: Nonconstant Fields,  Next: Changing Fields,  Prev: Fields,  Up: Reading Files

4.3 Nonconstant Field Numbers
=============================

A field number need not be a constant.  Any expression in the 'awk'
language can be used after a '$' to refer to a field.  The value of the
expression specifies the field number.  If the value is a string, rather
than a number, it is converted to a number.  Consider this example:

     awk '{ print $NR }'

Recall that 'NR' is the number of records read so far: one in the first
record, two in the second, and so on.  So this example prints the first
field of the first record, the second field of the second record, and so
on.  For the twentieth record, field number 20 is printed; most likely,
the record has fewer than 20 fields, so this prints a blank line.  Here
is another example of using expressions as field numbers:

     awk '{ print $(2*2) }' mail-list

   'awk' evaluates the expression '(2*2)' and uses its value as the
number of the field to print.  The '*' represents multiplication, so the
expression '2*2' evaluates to four.  The parentheses are used so that
the multiplication is done before the '$' operation; they are necessary
whenever there is a binary operator(1) in the field-number expression.
This example, then, prints the type of relationship (the fourth field)
for every line of the file 'mail-list'.  (All of the 'awk' operators are
listed, in order of decreasing precedence, in *note Precedence::.)

   If the field number you compute is zero, you get the entire record.
Thus, '$(2-2)' has the same value as '$0'.  Negative field numbers are
not allowed; trying to reference one usually terminates the program.
(The POSIX standard does not define what happens when you reference a
negative field number.  'gawk' notices this and terminates your program.
Other 'awk' implementations may behave differently.)

   As mentioned in *note Fields::, 'awk' stores the current record's
number of fields in the built-in variable 'NF' (also *note Built-in
Variables::).  Thus, the expression '$NF' is not a special feature--it
is the direct consequence of evaluating 'NF' and using its value as a
field number.
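
   For instance, because 'NF' is an ordinary variable, an expression
such as 'NF - 1' works after '$' just as well, yielding the
next-to-last field:

```shell
echo 'a b c d' | awk '{ print $(NF-1) }'   # prints "c"
```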

   ---------- Footnotes ----------

   (1) A "binary operator", such as '*' for multiplication, is one that
takes two operands.  The distinction is required because 'awk' also has
unary (one-operand) and ternary (three-operand) operators.


File: gawk.info,  Node: Changing Fields,  Next: Field Separators,  Prev: Nonconstant Fields,  Up: Reading Files

4.4 Changing the Contents of a Field
====================================

The contents of a field, as seen by 'awk', can be changed within an
'awk' program; this changes what 'awk' perceives as the current input
record.  (The actual input is untouched; 'awk' _never_ modifies the
input file.)  Consider the following example and its output:

     $ awk '{ nboxes = $3 ; $3 = $3 - 10
     >        print nboxes, $3 }' inventory-shipped
     -| 25 15
     -| 32 22
     -| 24 14
     ...

The program first saves the original value of field three in the
variable 'nboxes'.  The '-' sign represents subtraction, so this program
reassigns field three, '$3', as the original value of field three minus
ten: '$3 - 10'.  (*Note Arithmetic Ops::.)  Then it prints the original
and new values for field three.  (Someone in the warehouse made a
consistent mistake while inventorying the red boxes.)

   For this to work, the text in '$3' must make sense as a number; the
string of characters must be converted to a number for the computer to
do arithmetic on it.  The number resulting from the subtraction is
converted back to a string of characters that then becomes field three.
*Note Conversion::.

   When the value of a field is changed (as perceived by 'awk'), the
text of the input record is recalculated to contain the new field where
the old one was.  In other words, '$0' changes to reflect the altered
field.  Thus, this program prints a copy of the input file, with 10
subtracted from the second field of each line:

     $ awk '{ $2 = $2 - 10; print $0 }' inventory-shipped
     -| Jan 3 25 15 115
     -| Feb 5 32 24 226
     -| Mar 5 24 34 228
     ...

   It is also possible to assign contents to fields that are out of
range.  For example:

     $ awk '{ $6 = ($5 + $4 + $3 + $2)
     >        print $6 }' inventory-shipped
     -| 168
     -| 297
     -| 301
     ...

We've just created '$6', whose value is the sum of fields '$2', '$3',
'$4', and '$5'.  The '+' sign represents addition.  For the file
'inventory-shipped', '$6' represents the total number of parcels shipped
for a particular month.

   Creating a new field changes 'awk''s internal copy of the current
input record, which is the value of '$0'.  Thus, if you do 'print $0'
after adding a field, the record printed includes the new field, with
the appropriate number of field separators between it and the previously
existing fields.

   This recomputation affects and is affected by 'NF' (the number of
fields; *note Fields::).  For example, the value of 'NF' is set to the
number of the highest field you create.  The exact format of '$0' is
also affected by a feature that has not been discussed yet: the "output
field separator", 'OFS', used to separate the fields (*note Output
Separators::).
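
   A small demonstration ties these points together: creating '$6' in a
three-field record sets 'NF' to six, and the rebuilt '$0' uses 'OFS',
with the empty intervening fields '$4' and '$5' visible as adjacent
separators:

```shell
# Expected output:
#   6
#   a-b-c---end
echo 'a b c' | awk '{ OFS = "-"; $6 = "end"; print NF; print $0 }'
```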

   Note, however, that merely _referencing_ an out-of-range field does
_not_ change the value of either '$0' or 'NF'.  Referencing an
out-of-range field only produces an empty string.  For example:

     if ($(NF+1) != "")
         print "can't happen"
     else
         print "everything is normal"

should print 'everything is normal', because 'NF+1' is certain to be out
of range.  (*Note If Statement:: for more information about 'awk''s
'if-else' statements.  *Note Typing and Comparison:: for more
information about the '!=' operator.)

   It is important to note that making an assignment to an existing
field changes the value of '$0' but does not change the value of 'NF',
even when you assign the empty string to a field.  For example:

     $ echo a b c d | awk '{ OFS = ":"; $2 = ""
     >                       print $0; print NF }'
     -| a::c:d
     -| 4

The field is still there; it just has an empty value, delimited by the
two colons between 'a' and 'c'.  This example shows what happens if you
create a new field:

     $ echo a b c d | awk '{ OFS = ":"; $2 = ""; $6 = "new"
     >                       print $0; print NF }'
     -| a::c:d::new
     -| 6

The intervening field, '$5', is created with an empty value (indicated
by the second pair of adjacent colons), and 'NF' is updated with the
value six.

   Decrementing 'NF' throws away the values of the fields after the new
value of 'NF' and recomputes '$0'.  (d.c.)  Here is an example:

     $ echo a b c d e f | awk '{ print "NF =", NF;
     >                           NF = 3; print $0 }'
     -| NF = 6
     -| a b c

     CAUTION: Some versions of 'awk' don't rebuild '$0' when 'NF' is
     decremented.

   Finally, there are times when it is convenient to force 'awk' to
rebuild the entire record, using the current values of the fields and
'OFS'.  To do this, use the seemingly innocuous assignment:

     $1 = $1   # force record to be reconstituted
     print $0  # or whatever else with $0

This forces 'awk' to rebuild the record.  It does help to add a comment,
as we've shown here.

   There is a flip side to the relationship between '$0' and the fields.
Any assignment to '$0' causes the record to be reparsed into fields
using the _current_ value of 'FS'.  This also applies to any built-in
function that updates '$0', such as 'sub()' and 'gsub()' (*note String
Functions::).
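
   The following one-liner (with made-up data) shows the reparsing in
action; the assignment to '$0' causes the new text to be split using
the just-assigned value of 'FS':

```shell
echo 'one two' | awk '{ FS = ":"; $0 = "a:b:c"; print $2 }'   # prints "b"
```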

                          Understanding '$0'

   It is important to remember that '$0' is the _full_ record, exactly
as it was read from the input.  This includes any leading or trailing
whitespace, and the exact whitespace (or other characters) that
separates the fields.

   It is a common error to try to change the field separators in a
record simply by setting 'FS' and 'OFS', and then expecting a plain
'print' or 'print $0' to print the modified record.

   But this does not work, because nothing was done to change the record
itself.  Instead, you must force the record to be rebuilt, typically
with a statement such as '$1 = $1', as described earlier.
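
   Comparing the two approaches side by side makes the difference
plain.  Setting 'OFS' alone leaves '$0' untouched; assigning '$1 = $1'
forces the rebuild:

```shell
# Expected output of the first command:  a b c
# Expected output of the second command: a:b:c
echo 'a b c' | awk 'BEGIN { OFS = ":" } { print $0 }'
echo 'a b c' | awk 'BEGIN { OFS = ":" } { $1 = $1; print $0 }'
```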


File: gawk.info,  Node: Field Separators,  Next: Constant Size,  Prev: Changing Fields,  Up: Reading Files

4.5 Specifying How Fields Are Separated
=======================================

* Menu:

* Default Field Splitting::      How fields are normally separated.
* Regexp Field Splitting::       Using regexps as the field separator.
* Single Character Fields::      Making each character a separate field.
* Command Line Field Separator:: Setting 'FS' from the command line.
* Full Line Fields::             Making the full line be a single field.
* Field Splitting Summary::      Some final points and a summary table.

The "field separator", which is either a single character or a regular
expression, controls the way 'awk' splits an input record into fields.
'awk' scans the input record for character sequences that match the
separator; the fields themselves are the text between the matches.

   In the examples that follow, we use the bullet symbol (*) to
represent spaces in the output.  If the field separator is 'oo', then
the following line:

     moo goo gai pan

is split into three fields: 'm', '*g', and '*gai*pan'.  Note the leading
spaces in the values of the second and third fields.

   The field separator is represented by the predefined variable 'FS'.
Shell programmers take note: 'awk' does _not_ use the name 'IFS' that is
used by the POSIX-compliant shells (such as the Unix Bourne shell, 'sh',
or Bash).

   The value of 'FS' can be changed in the 'awk' program with the
assignment operator, '=' (*note Assignment Ops::).  Often, the right
time to do this is at the beginning of execution before any input has
been processed, so that the very first record is read with the proper
separator.  To do this, use the special 'BEGIN' pattern (*note
BEGIN/END::).  For example, here we set the value of 'FS' to the string
'","':

     awk 'BEGIN { FS = "," } ; { print $2 }'

Given the input line:

     John Q. Smith, 29 Oak St., Walamazoo, MI 42139

this 'awk' program extracts and prints the string '*29*Oak*St.'.

   Sometimes the input data contains separator characters that don't
separate fields the way you thought they would.  For instance, the
person's name in the example we just used might have a title or suffix
attached, such as:

     John Q. Smith, LXIX, 29 Oak St., Walamazoo, MI 42139

The same program would extract '*LXIX' instead of '*29*Oak*St.'.  If you
were expecting the program to print the address, you would be surprised.
The moral is to choose your data layout and separator characters
carefully to prevent such problems.  (If the data is not in a form that
is easy to process, perhaps you can massage it first with a separate
'awk' program.)


File: gawk.info,  Node: Default Field Splitting,  Next: Regexp Field Splitting,  Up: Field Separators

4.5.1 Whitespace Normally Separates Fields
------------------------------------------

Fields are normally separated by whitespace sequences (spaces, TABs, and
newlines), not by single spaces.  Two spaces in a row do not delimit an
empty field.  The default value of the field separator 'FS' is a string
containing a single space, '" "'.  If 'awk' interpreted this value in
the usual way, each space character would separate fields, so two spaces
in a row would make an empty field between them.  The reason this does
not happen is that a single space as the value of 'FS' is a special
case--it is taken to specify the default manner of delimiting fields.

   If 'FS' is any other single character, such as '","', then each
occurrence of that character separates two fields.  Two consecutive
occurrences delimit an empty field.  If the character occurs at the
beginning or the end of the line, that too delimits an empty field.  The
space character is the only single character that does not follow these
rules.
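
   A quick way to see these rules is to count fields in a line with
consecutive, leading, and trailing commas; each occurrence delimits a
field, so this made-up input has five fields (three of them empty):

```shell
echo ',a,,b,' | awk -F, '{ print NF }'   # prints 5
```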


File: gawk.info,  Node: Regexp Field Splitting,  Next: Single Character Fields,  Prev: Default Field Splitting,  Up: Field Separators

4.5.2 Using Regular Expressions to Separate Fields
--------------------------------------------------

The previous node discussed the use of single characters or simple
strings as the value of 'FS'.  More generally, the value of 'FS' may be
a string containing any regular expression.  In this case, each match in
the record for the regular expression separates fields.  For example,
the assignment:

     FS = ", \t"

makes every area of an input line that consists of a comma followed by a
space and a TAB into a field separator.  ('\t' is an "escape sequence"
that stands for a TAB; *note Escape Sequences::, for the complete list
of similar escape sequences.)

   For a less trivial example of a regular expression, try using single
spaces to separate fields the way single commas are used.  'FS' can be
set to '"[ ]"' (left bracket, space, right bracket).  This regular
expression matches a single space and nothing else (*note Regexp::).
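
   The difference shows up with two spaces in a row.  The default 'FS'
treats the run of spaces as one separator; 'FS = "[ ]"' treats each
space separately, producing an empty second field:

```shell
# Expected output of the first command:  2
# Expected output of the second command: 3
echo 'a  b' | awk '{ print NF }'
echo 'a  b' | awk 'BEGIN { FS = "[ ]" } { print NF }'
```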

   There is an important difference between the two cases of 'FS = " "'
(a single space) and 'FS = "[ \t\n]+"' (a regular expression matching
one or more spaces, TABs, or newlines).  For both values of 'FS', fields
are separated by "runs" (multiple adjacent occurrences) of spaces, TABs,
and/or newlines.  However, when the value of 'FS' is '" "', 'awk' first
strips leading and trailing whitespace from the record and then decides
where the fields are.  For example, the following pipeline prints 'b':

     $ echo ' a b c d ' | awk '{ print $2 }'
     -| b

However, this pipeline prints 'a' (note the extra spaces around each
letter):

     $ echo ' a  b  c  d ' | awk 'BEGIN { FS = "[ \t\n]+" }
     >                                  { print $2 }'
     -| a

In this case, the first field is null, or empty.

   The stripping of leading and trailing whitespace also comes into play
whenever '$0' is recomputed.  For instance, study this pipeline:

     $ echo '   a b c d' | awk '{ print; $2 = $2; print }'
     -|    a b c d
     -| a b c d

The first 'print' statement prints the record as it was read, with
leading whitespace intact.  The assignment to '$2' rebuilds '$0' by
concatenating '$1' through '$NF' together, separated by the value of
'OFS' (which is a space by default).  Because the leading whitespace was
ignored when finding '$1', it is not part of the new '$0'.  Finally, the
last 'print' statement prints the new '$0'.

   There is an additional subtlety to be aware of when using regular
expressions for field splitting.  It is not well specified in the POSIX
standard, or anywhere else, what '^' means when splitting fields.  Does
the '^' match only at the beginning of the entire record?  Or is each
field separator a new string?  It turns out that different 'awk'
versions answer this question differently, and you should not rely on
any specific behavior in your programs.  (d.c.)

   As a point of information, BWK 'awk' allows '^' to match only at the
beginning of the record.  'gawk' also works this way.  For example:

     $ echo 'xxAA  xxBxx  C' |
     > gawk -F '(^x+)|( +)' '{ for (i = 1; i <= NF; i++)
     >                             printf "-->%s<--\n", $i }'
     -| --><--
     -| -->AA<--
     -| -->xxBxx<--
     -| -->C<--


File: gawk.info,  Node: Single Character Fields,  Next: Command Line Field Separator,  Prev: Regexp Field Splitting,  Up: Field Separators

4.5.3 Making Each Character a Separate Field
--------------------------------------------

There are times when you may want to examine each character of a record
separately.  This can be done in 'gawk' by simply assigning the null
string ('""') to 'FS'.  (c.e.)  In this case, each individual character
in the record becomes a separate field.  For example:

     $ echo a b | gawk 'BEGIN { FS = "" }
     >                  {
     >                      for (i = 1; i <= NF; i = i + 1)
     >                          print "Field", i, "is", $i
     >                  }'
     -| Field 1 is a
     -| Field 2 is
     -| Field 3 is b

   Traditionally, the behavior of 'FS' equal to '""' was not defined.
In this case, most versions of Unix 'awk' simply treat the entire record
as only having one field.  (d.c.)  In compatibility mode (*note
Options::), if 'FS' is the null string, then 'gawk' also behaves this
way.


File: gawk.info,  Node: Command Line Field Separator,  Next: Full Line Fields,  Prev: Single Character Fields,  Up: Field Separators

4.5.4 Setting 'FS' from the Command Line
----------------------------------------

'FS' can be set on the command line.  Use the '-F' option to do so.  For
example:

     awk -F, 'PROGRAM' INPUT-FILES

sets 'FS' to the ',' character.  Notice that the option uses an
uppercase 'F' instead of a lowercase 'f'.  The latter option ('-f')
specifies a file containing an 'awk' program.

   The value used for the argument to '-F' is processed in exactly the
same way as assignments to the predefined variable 'FS'.  Any special
characters in the field separator must be escaped appropriately.  For
example, to use a '\' as the field separator on the command line, you
would have to type:

     # same as FS = "\\"
     awk -F\\\\ '...' files ...

Because '\' is used for quoting in the shell, 'awk' sees '-F\\'.  Then
'awk' processes the '\\' for escape characters (*note Escape
Sequences::), finally yielding a single '\' to use for the field
separator.
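
   An equivalent form uses shell single quotes instead of the
doubled-up backslashes; the input path here is simply made-up sample
data containing literal backslashes:

```shell
# The shell passes -F\\ to awk; awk processes the escape sequence,
# yielding a single \ as the field separator.
printf 'C:\\Users\\arnold\n' | awk -F'\\' '{ print $2 }'   # prints "Users"
```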

   As a special case, in compatibility mode (*note Options::), if the
argument to '-F' is 't', then 'FS' is set to the TAB character.  If you
type '-F\t' at the shell, without any quotes, the '\' gets deleted, so
'awk' figures that you really want your fields to be separated with TABs
and not 't's.  Use '-v FS="t"' or '-F"[t]"' on the command line if you
really do want to separate your fields with 't's.  Use '-F '\t'' when
not in compatibility mode to specify that TABs separate fields.

   As an example, let's use an 'awk' program file called 'edu.awk' that
contains the pattern '/edu/' and the action 'print $1':

     /edu/   { print $1 }

   Let's also set 'FS' to be the '-' character and run the program on
the file 'mail-list'.  The following command prints a list of the names
of the people that work at or attend a university, and the first three
digits of their phone numbers:

     $ awk -F- -f edu.awk mail-list
     -| Fabius       555
     -| Samuel       555
     -| Jean

Note the third line of output.  The third line in the original file
looked like this:

     Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R

   The '-' as part of the person's name was used as the field separator,
instead of the '-' in the phone number that was originally intended.
This demonstrates why you have to be careful in choosing your field and
record separators.

   Perhaps the most common use of a single character as the field
separator occurs when processing the Unix system password file.  On many
Unix systems, each user has a separate entry in the system password
file, with one line per user.  The information in these lines is
separated by colons.  The first field is the user's login name and the
second is the user's encrypted or shadow password.  (A shadow password
is indicated by the presence of a single 'x' in the second field.)  A
password file entry might look like this:

     arnold:x:2076:10:Arnold Robbins:/home/arnold:/bin/bash

   The following program searches the system password file and prints
the entries for users whose full name is not indicated:

     awk -F: '$5 == ""' /etc/passwd


File: gawk.info,  Node: Full Line Fields,  Next: Field Splitting Summary,  Prev: Command Line Field Separator,  Up: Field Separators

4.5.5 Making the Full Line Be a Single Field
--------------------------------------------

Occasionally, it's useful to treat the whole input line as a single
field.  This can be done easily and portably simply by setting 'FS' to
'"\n"' (a newline):(1)

     awk -F'\n' 'PROGRAM' FILES ...

When you do this, '$1' is the same as '$0'.
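
   This is easy to verify: with 'FS' set to a newline, each record has
exactly one field, and that field is the entire record:

```shell
# Expected output:
#   1
#   same
echo 'a b c' | awk -F'\n' '{ print NF; print ($1 == $0 ? "same" : "different") }'
```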

               Changing 'FS' Does Not Affect the Fields

   According to the POSIX standard, 'awk' is supposed to behave as if
each record is split into fields at the time it is read.  In particular,
this means that if you change the value of 'FS' after a record is read,
the values of the fields (i.e., how they were split) should reflect the
old value of 'FS', not the new one.

   However, many older implementations of 'awk' do not work this way.
Instead, they defer splitting the fields until a field is actually
referenced.  The fields are split using the _current_ value of 'FS'!
(d.c.)  This behavior can be difficult to diagnose.  The following
example illustrates the difference between the two methods:

     sed 1q /etc/passwd | awk '{ FS = ":" ; print $1 }'

which usually prints:

     root

on an incorrect implementation of 'awk', while 'gawk' prints the full
first line of the file, something like:

     root:x:0:0:Root:/:

   (The 'sed'(2) command prints just the first line of '/etc/passwd'.)

   ---------- Footnotes ----------

   (1) Thanks to Andrew Schorr for this tip.

   (2) The 'sed' utility is a "stream editor."  Its behavior is also
defined by the POSIX standard.


File: gawk.info,  Node: Field Splitting Summary,  Prev: Full Line Fields,  Up: Field Separators

4.5.6 Field-Splitting Summary
-----------------------------

It is important to remember that when you assign a string constant as
the value of 'FS', it undergoes normal 'awk' string processing.  For
example, with Unix 'awk' and 'gawk', the assignment 'FS = "\.."' assigns
the character string '".."' to 'FS' (the backslash is stripped).  This
creates a regexp meaning "fields are separated by occurrences of any two
characters."  If instead you want fields to be separated by a literal
period followed by any single character, use 'FS = "\\.."'.
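
   The '"\\.."' case can be checked with a line (made up for this
sketch) in which a period plus one character separates the fields:

```shell
# FS = "\\.." becomes the regexp \.. -- a literal period followed by
# any one character -- so ".X" and ".Y" are the separators here.
echo 'a.Xb.Yc' | awk 'BEGIN { FS = "\\.." } { print $2 }'   # prints "b"
```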

   The following list summarizes how fields are split, based on the
value of 'FS' ('==' means "is equal to"):

'FS == " "'
     Fields are separated by runs of whitespace.  Leading and trailing
     whitespace are ignored.  This is the default.

'FS == ANY OTHER SINGLE CHARACTER'
     Fields are separated by each occurrence of the character.  Multiple
     successive occurrences delimit empty fields, as do leading and
     trailing occurrences.  The character can even be a regexp
     metacharacter; it does not need to be escaped.

'FS == REGEXP'
     Fields are separated by occurrences of characters that match
     REGEXP.  Leading and trailing matches of REGEXP delimit empty
     fields.

'FS == ""'
     Each individual character in the record becomes a separate field.
     (This is a common extension; it is not specified by the POSIX
     standard.)

                         'FS' and 'IGNORECASE'

   The 'IGNORECASE' variable (*note User-modified::) affects field
splitting _only_ when the value of 'FS' is a regexp.  It has no effect
when 'FS' is a single character, even if that character is a letter.
Thus, in the following code:

     FS = "c"
     IGNORECASE = 1
     $0 = "aCa"
     print $1

The output is 'aCa'.  If you really want to split fields on an
alphabetic character while ignoring case, use a regexp that will do it
for you (e.g., 'FS = "[c]"').  In this case, 'IGNORECASE' will take
effect.
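
   Here is that comparison as a pair of runnable one-liners (they
require 'gawk', since 'IGNORECASE' is a 'gawk' extension):

```shell
# Expected output of the first command:  1  (single-character FS: no split)
# Expected output of the second command: 2  (regexp FS: splits on the C)
echo 'aCa' | gawk 'BEGIN { FS = "c"; IGNORECASE = 1 }   { print NF }'
echo 'aCa' | gawk 'BEGIN { FS = "[c]"; IGNORECASE = 1 } { print NF }'
```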


File: gawk.info,  Node: Constant Size,  Next: Splitting By Content,  Prev: Field Separators,  Up: Reading Files

4.6 Reading Fixed-Width Data
============================

This minor node discusses an advanced feature of 'gawk'.  If you are a
novice 'awk' user, you might want to skip it on the first reading.

   'gawk' provides a facility for dealing with fixed-width fields with
no distinctive field separator.  For example, data of this nature arises
in the input for old Fortran programs where numbers are run together, or
in the output of programs that did not anticipate the use of their
output as input for other programs.

   An example of the latter is a table where all the columns are lined
up by the use of a variable number of spaces and _empty fields are just
spaces_.  Clearly, 'awk''s normal field splitting based on 'FS' does not
work well in this case.  Although a portable 'awk' program can use a
series of 'substr()' calls on '$0' (*note String Functions::), this is
awkward and inefficient for a large number of fields.

   The splitting of an input record into fixed-width fields is specified
by assigning a string containing space-separated numbers to the built-in
variable 'FIELDWIDTHS'.  Each number specifies the width of the field,
_including_ columns between fields.  If you want to ignore the columns
between fields, you can specify the width as a separate field that is
subsequently ignored.  It is a fatal error to supply a field width that
has a negative value.  The following data is the output of the Unix 'w'
utility.  It is useful to illustrate the use of 'FIELDWIDTHS':

      10:06pm  up 21 days, 14:04,  23 users
     User     tty       login  idle   JCPU   PCPU  what
     hzuo     ttyV0     8:58pm            9      5  vi p24.tex
     hzang    ttyV3     6:37pm    50                -csh
     eklye    ttyV5     9:53pm            7      1  em thes.tex
     dportein ttyV6     8:17pm  1:47                -csh
     gierd    ttyD3    10:00pm     1                elm
     dave     ttyD4     9:47pm            4      4  w
     brent    ttyp0    26Jun91  4:46  26:46   4:41  bash
     dave     ttyq4    26Jun9115days     46     46  wnewmail

   The following program takes this input, converts the idle time to
number of seconds, and prints out the first two fields and the
calculated idle time:

     BEGIN  { FIELDWIDTHS = "9 6 10 6 7 7 35" }
     NR > 2 {
         idle = $4
         sub(/^ +/, "", idle)   # strip leading spaces
         if (idle == "")
             idle = 0
         if (idle ~ /:/) {
             split(idle, t, ":")
             idle = t[1] * 60 + t[2]
         }
         if (idle ~ /days/)
             idle *= 24 * 60 * 60

         print $1, $2, idle
     }

     NOTE: The preceding program uses a number of 'awk' features that
     haven't been introduced yet.

   Running the program on the data produces the following results:

     hzuo      ttyV0  0
     hzang     ttyV3  50
     eklye     ttyV5  0
     dportein  ttyV6  107
     gierd     ttyD3  1
     dave      ttyD4  0
     brent     ttyp0  286
     dave      ttyq4  1296000

   Another (possibly more practical) example of fixed-width input data
is the input from a deck of balloting cards.  In some parts of the
United States, voters mark their choices by punching holes in computer
cards.  These cards are then processed to count the votes for any
particular candidate or on any particular issue.  Because a voter may
choose not to vote on some issue, any column on the card may be empty.
An 'awk' program for processing such data could use the 'FIELDWIDTHS'
feature to simplify reading the data.  (Of course, getting 'gawk' to run
on a system with card readers is another story!)

   Assigning a value to 'FS' causes 'gawk' to use 'FS' for field
splitting again.  Use 'FS = FS' to make this happen, without having to
know the current value of 'FS'.  In order to tell which kind of field
splitting is in effect, use 'PROCINFO["FS"]' (*note Auto-set::).  The
value is '"FS"' if regular field splitting is being used, or
'"FIELDWIDTHS"' if fixed-width field splitting is being used:

     if (PROCINFO["FS"] == "FS")
         REGULAR FIELD SPLITTING ...
     else if  (PROCINFO["FS"] == "FIELDWIDTHS")
         FIXED-WIDTH FIELD SPLITTING ...
     else
         CONTENT-BASED FIELD SPLITTING ... (see next minor node)

   This information is useful when writing a function that needs to
temporarily change 'FS' or 'FIELDWIDTHS', read some records, and then
restore the original settings (*note Passwd Functions:: for an example
of such a function).
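
   As a minimal sketch of both points (the input is made-up
fixed-width data; 'gawk' is required), three widths split the record,
with the middle field acting as ignored filler, and 'PROCINFO["FS"]'
reports which splitting mode is in effect:

```shell
# Expected output: abc defg FIELDWIDTHS
echo 'abcXXdefg' | gawk 'BEGIN { FIELDWIDTHS = "3 2 4" }
                         { print $1, $3, PROCINFO["FS"] }'
```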


File: gawk.info,  Node: Splitting By Content,  Next: Multiple Line,  Prev: Constant Size,  Up: Reading Files

4.7 Defining Fields by Content
==============================

This minor node discusses an advanced feature of 'gawk'.  If you are a
novice 'awk' user, you might want to skip it on the first reading.

   Normally, when using 'FS', 'gawk' defines the fields as the parts of
the record that occur in between each field separator.  In other words,
'FS' defines what a field _is not_, instead of what a field _is_.
However, there are times when you really want to define the fields by
what they are, and not by what they are not.

   The most notorious such case is so-called "comma-separated values"
(CSV) data.  Many spreadsheet programs, for example, can export their
data into text files, where each record is terminated with a newline,
and fields are separated by commas.  If commas only separated the data,
there wouldn't be an issue.  The problem comes when one of the fields
contains an _embedded_ comma.  In such cases, most programs embed the
field in double quotes.(1)  So, we might have data like this:

     Robbins,Arnold,"1234 A Pretty Street, NE",MyTown,MyState,12345-6789,USA

   The 'FPAT' variable offers a solution for cases like this.  The value
of 'FPAT' should be a string that provides a regular expression.  This
regular expression describes the contents of each field.

   In the case of CSV data as presented here, each field is either
"anything that is not a comma," or "a double quote, anything that is not
a double quote, and a closing double quote."  If written as a regular
expression constant (*note Regexp::), we would have
'/([^,]+)|("[^"]+")/'.  Writing this as a string requires us to escape
the double quotes, leading to:

     FPAT = "([^,]+)|(\"[^\"]+\")"

   Putting this to use, here is a simple program to parse the data:

     BEGIN {
         FPAT = "([^,]+)|(\"[^\"]+\")"
     }

     {
         print "NF = ", NF
         for (i = 1; i <= NF; i++) {
             printf("$%d = <%s>\n", i, $i)
         }
     }

   When run, we get the following:

     $ gawk -f simple-csv.awk addresses.csv
     NF =  7
     $1 = <Robbins>
     $2 = <Arnold>
     $3 = <"1234 A Pretty Street, NE">
     $4 = <MyTown>
     $5 = <MyState>
     $6 = <12345-6789>
     $7 = <USA>

   Note the embedded comma in the value of '$3'.

   A straightforward improvement when processing CSV data of this sort
would be to remove the quotes when they occur, with something like this:

     if (substr($i, 1, 1) == "\"") {
         len = length($i)
         $i = substr($i, 2, len - 2)    # Get text within the two quotes
     }

   As with 'FS', the 'IGNORECASE' variable (*note User-modified::)
affects field splitting with 'FPAT'.

   Assigning a value to 'FPAT' overrides field splitting with 'FS' and
with 'FIELDWIDTHS'.  Similar to 'FIELDWIDTHS', the value of
'PROCINFO["FS"]' will be '"FPAT"' if content-based field splitting is
being used.

     NOTE: Some programs export CSV data that contains embedded newlines
     between the double quotes.  'gawk' provides no way to deal with
     this.  Even though a formal specification for CSV data exists,
     there isn't much more to be done; the 'FPAT' mechanism provides an
     elegant solution for the majority of cases, and the 'gawk'
     developers are satisfied with that.

   As written, the regexp used for 'FPAT' requires that each field
contain at least one character.  A straightforward modification
(changing the first '+' to '*') allows fields to be empty:

     FPAT = "([^,]*)|(\"[^\"]+\")"

   Finally, the 'patsplit()' function makes the same functionality
available for splitting regular strings (*note String Functions::).

   To recap, 'gawk' provides three independent methods to split input
records into fields.  The mechanism used is based on which of the three
variables--'FS', 'FIELDWIDTHS', or 'FPAT'--was last assigned to.

   ---------- Footnotes ----------

   (1) The CSV format lacked a formal standard definition for many
years.  RFC 4180 (http://www.ietf.org/rfc/rfc4180.txt) standardizes the
most common practices.


File: gawk.info,  Node: Multiple Line,  Next: Getline,  Prev: Splitting By Content,  Up: Reading Files

4.8 Multiple-Line Records
=========================

In some databases, a single line cannot conveniently hold all the
information in one entry.  In such cases, you can use multiline records.
The first step in doing this is to choose your data format.

   One technique is to use an unusual character or string to separate
records.  For example, you could use the formfeed character (written
'\f' in 'awk', as in C) to separate them, making each record a page of
the file.  To do this, just set the variable 'RS' to '"\f"' (a string
containing the formfeed character).  Any other character could equally
well be used, as long as it won't be part of the data in a record.

   Another technique is to have blank lines separate records.  By a
special dispensation, an empty string as the value of 'RS' indicates
that records are separated by one or more blank lines.  When 'RS' is set
to the empty string, each record always ends at the first blank line
encountered.  The next record doesn't start until the first nonblank
line that follows.  No matter how many blank lines appear in a row, they
all act as one record separator.  (Blank lines must be completely empty;
lines that contain only whitespace do not count.)

   You can achieve the same effect as 'RS = ""' by assigning the string
'"\n\n+"' to 'RS'.  This regexp matches the newline at the end of the
record and one or more blank lines after the record.  In addition, a
regular expression always matches the longest possible sequence when
there is a choice (*note Leftmost Longest::).  So, the next record
doesn't start until the first nonblank line that follows--no matter how
many blank lines appear in a row, they are considered one record
separator.

   However, there is an important difference between 'RS = ""' and 'RS =
"\n\n+"'.  In the first case, leading newlines in the input data file
are ignored, and if a file ends without extra blank lines after the last
record, the final newline is removed from the record.  In the second
case, this special processing is not done.  (d.c.)

   Now that the input is separated into records, the second step is to
separate the fields in the records.  One way to do this is to divide
each of the lines into fields in the normal manner.  This happens by
default as the result of a special feature.  When 'RS' is set to the
empty string _and_ 'FS' is set to a single character, the newline
character _always_ acts as a field separator.  This is in addition to
whatever field separations result from 'FS'.(1)

   The original motivation for this special exception was probably to
provide useful behavior in the default case (i.e., 'FS' is equal to
'" "').  This feature can be a problem if you really don't want the
newline character to separate fields, because there is no way to prevent
it.  However, you can work around this by using the 'split()' function
to break up the record manually (*note String Functions::).  If you have
a single-character field separator, you can work around the special
feature in a different way, by making 'FS' into a regexp for that single
character.  For example, if the field separator is a percent character,
instead of 'FS = "%"', use 'FS = "[%]"'.

   Another way to separate fields is to put each field on a separate
line: to do this, just set the variable 'FS' to the string '"\n"'.
(This single-character separator matches a single newline.)  A practical
example of a data file organized this way might be a mailing list, where
blank lines separate the entries.  Consider a mailing list in a file
named 'addresses', which looks like this:

     Jane Doe
     123 Main Street
     Anywhere, SE 12345-6789

     John Smith
     456 Tree-lined Avenue
     Smallville, MW 98765-4321
     ...

A simple program to process this file is as follows:

     # addrs.awk --- simple mailing list program

     # Records are separated by blank lines.
     # Each line is one field.
     BEGIN { RS = "" ; FS = "\n" }

     {
           print "Name is:", $1
           print "Address is:", $2
           print "City and State are:", $3
           print ""
     }

   Running the program produces the following output:

     $ awk -f addrs.awk addresses
     -| Name is: Jane Doe
     -| Address is: 123 Main Street
     -| City and State are: Anywhere, SE 12345-6789
     -|
     -| Name is: John Smith
     -| Address is: 456 Tree-lined Avenue
     -| City and State are: Smallville, MW 98765-4321
     -|
     ...

   *Note Labels Program:: for a more realistic program dealing with
address lists.  The following list summarizes how records are split,
based on the value of 'RS'.  ('==' means "is equal to.")

'RS == "\n"'
     Records are separated by the newline character ('\n').  In effect,
     every line in the data file is a separate record, including blank
     lines.  This is the default.

'RS == ANY SINGLE CHARACTER'
     Records are separated by each occurrence of the character.
     Multiple successive occurrences delimit empty records.

'RS == ""'
     Records are separated by runs of blank lines.  When 'FS' is a
     single character, then the newline character always serves as a
     field separator, in addition to whatever value 'FS' may have.
     Leading and trailing newlines in a file are ignored.

'RS == REGEXP'
     Records are separated by occurrences of characters that match
     REGEXP.  Leading and trailing matches of REGEXP delimit empty
     records.  (This is a 'gawk' extension; it is not specified by the
     POSIX standard.)

   If not in compatibility mode (*note Options::), 'gawk' sets 'RT' to
the input text that matched the value specified by 'RS'.  But if the
input file ended without any text that matches 'RS', then 'gawk' sets
'RT' to the null string.

   ---------- Footnotes ----------

   (1) When 'FS' is the null string ('""') or a regexp, this special
feature of 'RS' does not apply.  It does apply to the default field
separator of a single space: 'FS = " "'.


File: gawk.info,  Node: Getline,  Next: Read Timeout,  Prev: Multiple Line,  Up: Reading Files

4.9 Explicit Input with 'getline'
=================================

So far we have been getting our input data from 'awk''s main input
stream--either the standard input (usually your keyboard, sometimes the
output from another program) or the files specified on the command line.
The 'awk' language has a special built-in command called 'getline' that
can be used to read input under your explicit control.

   The 'getline' command is used in several different ways and should
_not_ be used by beginners.  The examples that follow the explanation of
the 'getline' command include material that has not been covered yet.
Therefore, come back and study the 'getline' command _after_ you have
reviewed the rest of this Info file and have a good knowledge of how
'awk' works.

   The 'getline' command returns 1 if it finds a record and 0 if it
encounters the end of the file.  If there is some error in getting a
record, such as a file that cannot be opened, then 'getline' returns -1.
In this case, 'gawk' sets the variable 'ERRNO' to a string describing
the error that occurred.

   If 'ERRNO' indicates that the I/O operation may be retried, and
'PROCINFO["INPUT", "RETRY"]' is set, then 'getline' returns -2 instead
of -1, and further calls to 'getline' may be attempted.  *Note Retrying
Input:: for further information about this feature.

   In the following examples, COMMAND stands for a string value that
represents a shell command.

     NOTE: When '--sandbox' is specified (*note Options::), reading
     lines from files, pipes, and coprocesses is disabled.

* Menu:

* Plain Getline::               Using 'getline' with no arguments.
* Getline/Variable::            Using 'getline' into a variable.
* Getline/File::                Using 'getline' from a file.
* Getline/Variable/File::       Using 'getline' into a variable from a
                                file.
* Getline/Pipe::                Using 'getline' from a pipe.
* Getline/Variable/Pipe::       Using 'getline' into a variable from a
                                pipe.
* Getline/Coprocess::           Using 'getline' from a coprocess.
* Getline/Variable/Coprocess::  Using 'getline' into a variable from a
                                coprocess.
* Getline Notes::               Important things to know about 'getline'.
* Getline Summary::             Summary of 'getline' Variants.


File: gawk.info,  Node: Plain Getline,  Next: Getline/Variable,  Up: Getline

4.9.1 Using 'getline' with No Arguments
---------------------------------------

The 'getline' command can be used without arguments to read input from
the current input file.  All it does in this case is read the next input
record and split it up into fields.  This is useful if you've finished
processing the current record, but want to do some special processing on
the next record _right now_.  For example:

     # Remove text between /* and */, inclusive
     {
         if ((i = index($0, "/*")) != 0) {
             out = substr($0, 1, i - 1)  # leading part of the string
             rest = substr($0, i + 2)    # ... */ ...
             j = index(rest, "*/")       # is */ in trailing part?
             if (j > 0) {
                 rest = substr(rest, j + 2)  # remove comment
             } else {
                 while (j == 0) {
                     # get more text
                     if (getline <= 0) {
                         print("unexpected EOF or error:", ERRNO) > "/dev/stderr"
                         exit
                     }
                     # build up the line using string concatenation
                     rest = rest $0
                     j = index(rest, "*/")   # is */ in trailing part?
                     if (j != 0) {
                         rest = substr(rest, j + 2)
                         break
                     }
                 }
             }
             # build up the output line using string concatenation
             $0 = out rest
         }
         print $0
     }

   This 'awk' program deletes C-style comments ('/* ... */') from the
input.  It uses a number of features we haven't covered yet, including
string concatenation (*note Concatenation::) and the 'index()' and
'substr()' built-in functions (*note String Functions::).  By replacing
the 'print $0' with other statements, you could perform more complicated
processing on the decommented input, such as searching for matches of a
regular expression.  (This program has a subtle problem--it does not
work if one comment ends and another begins on the same line.)

   This form of the 'getline' command sets 'NF', 'NR', 'FNR', 'RT', and
the value of '$0'.

     NOTE: The new value of '$0' is used to test the patterns of any
     subsequent rules.  The original value of '$0' that triggered the
     rule that executed 'getline' is lost.  By contrast, the 'next'
     statement reads a new record but immediately begins processing it
     normally, starting with the first rule in the program.  *Note Next
     Statement::.


File: gawk.info,  Node: Getline/Variable,  Next: Getline/File,  Prev: Plain Getline,  Up: Getline

4.9.2 Using 'getline' into a Variable
-------------------------------------

You can use 'getline VAR' to read the next record from 'awk''s input
into the variable VAR.  No other processing is done.  For example,
suppose the next line is a comment or a special string, and you want to
read it without triggering any rules.  This form of 'getline' allows you
to read that line and store it in a variable so that the main
read-a-line-and-check-each-rule loop of 'awk' never sees it.  The
following example swaps every two lines of input:

     {
          if ((getline tmp) > 0) {
               print tmp
               print $0
          } else
               print $0
     }

It takes the following list:

     wan
     tew
     free
     phore

and produces these results:

     tew
     wan
     phore
     free

   The 'getline' command used in this way sets only the variables 'NR',
'FNR', and 'RT' (and, of course, VAR).  The record is not split into
fields, so the values of the fields (including '$0') and the value of
'NF' do not change.


File: gawk.info,  Node: Getline/File,  Next: Getline/Variable/File,  Prev: Getline/Variable,  Up: Getline

4.9.3 Using 'getline' from a File
---------------------------------

Use 'getline < FILE' to read the next record from FILE.  Here, FILE is a
string-valued expression that specifies the file name.  '< FILE' is
called a "redirection" because it directs input to come from a different
place.  For example, the following program reads its input record from
the file 'secondary.input' when it encounters a first field with a value
equal to 10 in the current input file:

     {
         if ($1 == 10) {
              getline < "secondary.input"
              print
         } else
              print
     }

   Because the main input stream is not used, the values of 'NR' and
'FNR' are not changed.  However, the record it reads is split into
fields in the normal manner, so the values of '$0' and the other fields
are changed, resulting in a new value of 'NF'.  'RT' is also set.

   According to POSIX, 'getline < EXPRESSION' is ambiguous if EXPRESSION
contains unparenthesized operators other than '$'; for example, 'getline
< dir "/" file' is ambiguous because the concatenation operator (not
discussed yet; *note Concatenation::) is not parenthesized.  You should
write it as 'getline < (dir "/" file)' if you want your program to be
portable to all 'awk' implementations.


File: gawk.info,  Node: Getline/Variable/File,  Next: Getline/Pipe,  Prev: Getline/File,  Up: Getline

4.9.4 Using 'getline' into a Variable from a File
-------------------------------------------------

Use 'getline VAR < FILE' to read input from the file FILE, and put it in
the variable VAR.  As earlier, FILE is a string-valued expression that
specifies the file from which to read.

   In this version of 'getline', none of the predefined variables are
changed and the record is not split into fields.  The only variable
changed is VAR.(1)  For example, the following program copies all the
input files to the output, except for records that say
'@include FILENAME'.  Such a record is replaced by the contents of the
file FILENAME:

     {
          if (NF == 2 && $1 == "@include") {
               while ((getline line < $2) > 0)
                    print line
               close($2)
          } else
               print
     }

   Note here how the name of the extra input file is not built into the
program; it is taken directly from the data, specifically from the
second field on the '@include' line.

   The 'close()' function is called to ensure that if two identical
'@include' lines appear in the input, the entire specified file is
included twice.  *Note Close Files And Pipes::.

   One deficiency of this program is that it does not process nested
'@include' statements (i.e., '@include' statements in included files)
the way a true macro preprocessor would.  *Note Igawk Program:: for a
program that does handle nested '@include' statements.

   ---------- Footnotes ----------

   (1) This is not quite true.  'RT' could be changed if 'RS' is a
regular expression.


File: gawk.info,  Node: Getline/Pipe,  Next: Getline/Variable/Pipe,  Prev: Getline/Variable/File,  Up: Getline

4.9.5 Using 'getline' from a Pipe
---------------------------------

     Omniscience has much to recommend it.  Failing that, attention to
     details would be useful.
                         -- _Brian Kernighan_

   The output of a command can also be piped into 'getline', using
'COMMAND | getline'.  In this case, the string COMMAND is run as a shell
command and its output is piped into 'awk' to be used as input.  This
form of 'getline' reads one record at a time from the pipe.  For
example, the following program copies its input to its output, except
for lines that begin with '@execute', which are replaced by the output
produced by running the rest of the line as a shell command:

     {
          if ($1 == "@execute") {
               tmp = substr($0, 10)        # Remove "@execute"
               while ((tmp | getline) > 0)
                    print
               close(tmp)
          } else
               print
     }

The 'close()' function is called to ensure that if two identical
'@execute' lines appear in the input, the command is run for each one.
*Note Close Files And Pipes::.  Given the input:

     foo
     bar
     baz
     @execute who
     bletch

the program might produce:

     foo
     bar
     baz
     arnold     ttyv0   Jul 13 14:22
     miriam     ttyp0   Jul 13 14:23     (murphy:0)
     bill       ttyp1   Jul 13 14:23     (murphy:0)
     bletch

Notice that this program ran the command 'who' and printed the result.
(If you try this program yourself, you will of course get different
results, depending upon who is logged in on your system.)

   This variation of 'getline' splits the record into fields, sets the
value of 'NF', and recomputes the value of '$0'.  The values of 'NR' and
'FNR' are not changed.  'RT' is set.

   According to POSIX, 'EXPRESSION | getline' is ambiguous if EXPRESSION
contains unparenthesized operators other than '$'--for example, '"echo "
"date" | getline' is ambiguous because the concatenation operator is not
parenthesized.  You should write it as '("echo " "date") | getline' if
you want your program to be portable to all 'awk' implementations.

     NOTE: Unfortunately, 'gawk' has not been consistent in its
     treatment of a construct like '"echo " "date" | getline'.  Most
     versions, including the current version, treat it as '("echo "
     "date") | getline'.  (This is also how BWK 'awk' behaves.)  Some
     versions instead treat it as '"echo " ("date" | getline)'.  (This
     is how 'mawk' behaves.)  In short, _always_ use explicit
     parentheses, and then you won't have to worry.


File: gawk.info,  Node: Getline/Variable/Pipe,  Next: Getline/Coprocess,  Prev: Getline/Pipe,  Up: Getline

4.9.6 Using 'getline' into a Variable from a Pipe
-------------------------------------------------

When you use 'COMMAND | getline VAR', the output of COMMAND is sent
through a pipe to 'getline' and into the variable VAR.  For example, the
following program reads the current date and time into the variable
'current_time', using the 'date' utility, and then prints it:

     BEGIN {
          "date" | getline current_time
          close("date")
          print "Report printed on " current_time
     }

   In this version of 'getline', none of the predefined variables are
changed and the record is not split into fields.  However, 'RT' is set.

   According to POSIX, 'EXPRESSION | getline VAR' is ambiguous if
EXPRESSION contains unparenthesized operators other than '$'; for
example, '"echo " "date" | getline VAR' is ambiguous because the
concatenation operator is not parenthesized.  You should write it as
'("echo " "date") | getline VAR' if you want your program to be portable
to other 'awk' implementations.


File: gawk.info,  Node: Getline/Coprocess,  Next: Getline/Variable/Coprocess,  Prev: Getline/Variable/Pipe,  Up: Getline

4.9.7 Using 'getline' from a Coprocess
--------------------------------------

Reading input into 'getline' from a pipe is a one-way operation.  The
command that is started with 'COMMAND | getline' only sends data _to_
your 'awk' program.

   On occasion, you might want to send data to another program for
processing and then read the results back.  'gawk' allows you to start a
"coprocess", with which two-way communications are possible.  This is
done with the '|&' operator.  Typically, you write data to the coprocess
first and then read the results back, as shown in the following:

     print "SOME QUERY" |& "db_server"
     "db_server" |& getline

which sends a query to 'db_server' and then reads the results.

   The values of 'NR' and 'FNR' are not changed, because the main input
stream is not used.  However, the record is split into fields in the
normal manner, thus changing the values of '$0', of the other fields,
and of 'NF' and 'RT'.

   Coprocesses are an advanced feature.  They are discussed here only
because this is the minor node on 'getline'.  *Note Two-way I/O::, where
coprocesses are discussed in more detail.


File: gawk.info,  Node: Getline/Variable/Coprocess,  Next: Getline Notes,  Prev: Getline/Coprocess,  Up: Getline

4.9.8 Using 'getline' into a Variable from a Coprocess
------------------------------------------------------

When you use 'COMMAND |& getline VAR', the output from the coprocess
COMMAND is sent through a two-way pipe to 'getline' and into the
variable VAR.

   In this version of 'getline', none of the predefined variables are
changed and the record is not split into fields.  The only variable
changed is VAR.  However, 'RT' is set.

   Coprocesses are an advanced feature.  They are discussed here only
because this is the minor node on 'getline'.  *Note Two-way I/O::, where
coprocesses are discussed in more detail.


File: gawk.info,  Node: Getline Notes,  Next: Getline Summary,  Prev: Getline/Variable/Coprocess,  Up: Getline

4.9.9 Points to Remember About 'getline'
----------------------------------------

Here are some miscellaneous points about 'getline' that you should bear
in mind:

   * When 'getline' changes the value of '$0' and 'NF', 'awk' does _not_
     automatically jump to the start of the program and start testing
     the new record against every pattern.  However, the new record is
     tested against any subsequent rules.

   * Some very old 'awk' implementations limit the number of pipelines
     that an 'awk' program may have open to just one.  In 'gawk', there
     is no such limit.  You can open as many pipelines (and coprocesses)
     as the underlying operating system permits.

   * An interesting side effect occurs if you use 'getline' without a
     redirection inside a 'BEGIN' rule.  Because an unredirected
     'getline' reads from the command-line data files, the first
     'getline' command causes 'awk' to set the value of 'FILENAME'.
     Normally, 'FILENAME' does not have a value inside 'BEGIN' rules,
     because you have not yet started to process the command-line data
     files.  (d.c.)  (See *note BEGIN/END::; also *note Auto-set::.)

   * Using 'FILENAME' with 'getline' ('getline < FILENAME') is likely to
     be a source of confusion.  'awk' opens a separate input stream from
     the current input file.  However, because no variable is used, '$0'
     and 'NF' are still updated.  If you're doing this, it's probably by
     accident, and you should reconsider what it is you're trying to
     accomplish.

   * *note Getline Summary::, presents a table summarizing the 'getline'
     variants and which variables they can affect.  It is worth noting
     that those variants that do not use redirection can cause
     'FILENAME' to be updated if they cause 'awk' to start reading a new
     input file.

   * If the variable being assigned is an expression with side effects,
     different versions of 'awk' behave differently upon encountering
     end-of-file.  Some versions don't evaluate the expression; many
     versions (including 'gawk') do.  Here is an example, courtesy of
     Duncan Moore:

          BEGIN {
              system("echo 1 > f")
              while ((getline a[++c] < "f") > 0) { }
              print c
          }

     Here, the side effect is the '++c'.  Is 'c' incremented if
     end-of-file is encountered before the element in 'a' is assigned?

     'gawk' treats 'getline' like a function call, and evaluates the
     expression 'a[++c]' before attempting to read from 'f'.  However,
     some versions of 'awk' only evaluate the expression once they know
     that there is a string value to be assigned.


File: gawk.info,  Node: Getline Summary,  Prev: Getline Notes,  Up: Getline

4.9.10 Summary of 'getline' Variants
------------------------------------

*note Table 4.1: table-getline-variants. summarizes the eight variants
of 'getline', listing which predefined variables are set by each one,
and whether the variant is standard or a 'gawk' extension.  Note: for
each variant, 'gawk' sets the 'RT' predefined variable.

Variant                  Effect                      'awk' / 'gawk'
-------------------------------------------------------------------------
'getline'                Sets '$0', 'NF', 'FNR',     'awk'
                         'NR', and 'RT'
'getline' VAR            Sets VAR, 'FNR', 'NR',      'awk'
                         and 'RT'
'getline <' FILE         Sets '$0', 'NF', and 'RT'   'awk'
'getline VAR < FILE'     Sets VAR and 'RT'           'awk'
COMMAND '| getline'      Sets '$0', 'NF', and 'RT'   'awk'
COMMAND '| getline'      Sets VAR and 'RT'           'awk'
VAR
COMMAND '|& getline'     Sets '$0', 'NF', and 'RT'   'gawk'
COMMAND '|& getline'     Sets VAR and 'RT'           'gawk'
VAR

Table 4.1: 'getline' variants and what they set


File: gawk.info,  Node: Read Timeout,  Next: Retrying Input,  Prev: Getline,  Up: Reading Files

4.10 Reading Input with a Timeout
=================================

This minor node describes a feature that is specific to 'gawk'.

   You may specify a timeout in milliseconds for reading input from the
keyboard, a pipe, or two-way communication, including TCP/IP sockets.
This can be done on a per-input, per-command, or per-connection basis,
by setting a special element in the 'PROCINFO' array (*note Auto-set::):

     PROCINFO["input_name", "READ_TIMEOUT"] = TIMEOUT IN MILLISECONDS

   When set, this causes 'gawk' to time out and return failure if no
data is available to read within the specified timeout period.  For
example, a TCP client can decide to give up on receiving any response
from the server after a certain amount of time:

     Service = "/inet/tcp/0/localhost/daytime"
     PROCINFO[Service, "READ_TIMEOUT"] = 100
     if ((Service |& getline) > 0)
         print $0
     else if (ERRNO != "")
         print ERRNO

   Here is how to read interactively from the user(1) without waiting
for more than five seconds:

     PROCINFO["/dev/stdin", "READ_TIMEOUT"] = 5000
     while ((getline < "/dev/stdin") > 0)
         print $0

   'gawk' terminates the read operation if input does not arrive after
waiting for the timeout period, returns failure, and sets 'ERRNO' to an
appropriate string value.  A negative or zero value for the timeout is
the same as specifying no timeout at all.

   A timeout can also be set for reading from the keyboard in the
implicit loop that reads input records and matches them against
patterns, like so:

     $ gawk 'BEGIN { PROCINFO["-", "READ_TIMEOUT"] = 5000 }
     > { print "You entered: " $0 }'
     gawk
     -| You entered: gawk

   In this case, failure to respond within five seconds results in the
following error message:

     error-> gawk: cmd. line:2: (FILENAME=- FNR=1) fatal: error reading input file `-': Connection timed out

   The timeout can be set or changed at any time, and will take effect
on the next attempt to read from the input device.  In the following
example, we start with a timeout value of one second, and progressively
reduce it by one-tenth of a second until we wait indefinitely for the
input to arrive:

     PROCINFO[Service, "READ_TIMEOUT"] = 1000
     while ((Service |& getline) > 0) {
         print $0
         PROCINFO[Service, "READ_TIMEOUT"] -= 100
     }

     NOTE: You should not assume that the read operation will block
     exactly after the tenth record has been printed.  It is possible
     that 'gawk' will read and buffer more than one record's worth of
     data the first time.  Because of this, changing the timeout value
     as in the preceding example is not very useful.

   If the 'PROCINFO' element is not present and the 'GAWK_READ_TIMEOUT'
environment variable exists, 'gawk' uses its value to initialize the
timeout value.  The exclusive use of the environment variable to specify
timeout has the disadvantage of not being able to control it on a
per-command or per-connection basis.

   'gawk' considers a timeout event to be an error even though the
attempt to read from the underlying device may succeed in a later
attempt.  This is a limitation, and it also means that you cannot use
this to multiplex input from two or more sources.  *Note Retrying
Input:: for a way to enable later I/O attempts to succeed.

   Assigning a timeout value prevents read operations from blocking
indefinitely.  But bear in mind that there are other ways 'gawk' can
stall waiting for an input device to be ready.  A network client can
sometimes take a long time to establish a connection before it can start
reading any data, or the attempt to open a FIFO special file for reading
can block indefinitely until some other process opens it for writing.

   ---------- Footnotes ----------

   (1) This assumes that standard input is the keyboard.


File: gawk.info,  Node: Retrying Input,  Next: Command-line directories,  Prev: Read Timeout,  Up: Reading Files

4.11 Retrying Reads After Certain Input Errors
==============================================

This minor node describes a feature that is specific to 'gawk'.

   When 'gawk' encounters an error while reading input, by default
'getline' returns -1, and subsequent attempts to read from that file
result in an end-of-file indication.  However, you may optionally
instruct 'gawk' to allow I/O to be retried when certain errors are
encountered by setting a special element in the 'PROCINFO' array (*note
Auto-set::):

     PROCINFO["INPUT_NAME", "RETRY"] = 1

   When this element exists, 'gawk' checks the value of the system (C
language) 'errno' variable when an I/O error occurs.  If 'errno'
indicates a subsequent I/O attempt may succeed, 'getline' instead
returns -2 and further calls to 'getline' may succeed.  This applies to
the 'errno' values 'EAGAIN', 'EWOULDBLOCK', 'EINTR', or 'ETIMEDOUT'.

   This feature is useful in conjunction with 'PROCINFO["INPUT_NAME",
"READ_TIMEOUT"]' or situations where a file descriptor has been
configured to behave in a non-blocking fashion.
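   As a rough sketch, a 'getline' loop that distinguishes the -2
"retry" return from end-of-file and hard errors might look like the
following.  Here 'echo one; echo two' is just a stand-in for a slow or
flaky data source; the 'RETRY' element is a 'gawk' extension, and other
'awk' versions simply store it as an ordinary array element and never
return -2:

```shell
# Sketch of a getline loop honoring gawk's -2 "retry" return value.
awk '
BEGIN {
    cmd = "echo one; echo two"        # stand-in for a slow input source
    PROCINFO[cmd, "RETRY"] = 1        # ask gawk to return -2, not -1
    for (;;) {
        ret = (cmd | getline line)
        if (ret > 0)
            print "got: " line
        else if (ret == -2)
            continue                  # transient error: retry the read
        else
            break                     # 0 is EOF, -1 is a hard error
    }
    close(cmd)
}'
```

With an ordinary command as input, the loop simply reads to EOF; the
-2 branch matters only when the underlying source can fail transiently
(for example, a descriptor in non-blocking mode).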


File: gawk.info,  Node: Command-line directories,  Next: Input Summary,  Prev: Retrying Input,  Up: Reading Files

4.12 Directories on the Command Line
====================================

According to the POSIX standard, files named on the 'awk' command line
must be text files; it is a fatal error if they are not.  Most versions
of 'awk' treat a directory on the command line as a fatal error.

   By default, 'gawk' produces a warning for a directory on the command
line, but otherwise ignores it.  This makes it easier to use shell
wildcards with your 'awk' program:

     $ gawk -f whizprog.awk *        Directories could kill this program

   If either of the '--posix' or '--traditional' options is given, then
'gawk' reverts to treating a directory on the command line as a fatal
error.

   *Note Extension Sample Readdir:: for a way to treat directories as
usable data from an 'awk' program.


File: gawk.info,  Node: Input Summary,  Next: Input Exercises,  Prev: Command-line directories,  Up: Reading Files

4.13 Summary
============

   * Input is split into records based on the value of 'RS'.  The
     possibilities are as follows:

     Value of 'RS'      Records are split on      'awk' / 'gawk'
                        ...
     ---------------------------------------------------------------------------
     Any single         That character            'awk'
     character
     The empty string   Runs of two or more       'awk'
     ('""')             newlines
     A regexp           Text that matches the     'gawk'
                        regexp

   * 'FNR' indicates how many records have been read from the current
     input file; 'NR' indicates how many records have been read in
     total.

   * 'gawk' sets 'RT' to the text matched by 'RS'.

   * After splitting the input into records, 'awk' further splits the
     records into individual fields, named '$1', '$2', and so on.  '$0'
     is the whole record, and 'NF' indicates how many fields there are.
     The default way to split fields is between whitespace characters.

   * Fields may be referenced using a variable, as in '$NF'.  Fields may
     also be assigned values, which causes the value of '$0' to be
     recomputed when it is later referenced.  Assigning to a field with
     a number greater than 'NF' creates the field and rebuilds the
     record, using 'OFS' to separate the fields.  Incrementing 'NF' does
     the same thing.  Decrementing 'NF' throws away fields and rebuilds
     the record.

   * Field splitting is more complicated than record splitting:

     Field separator value         Fields are split ...          'awk' /
                                                                 'gawk'
     ---------------------------------------------------------------------------
     'FS == " "'                   On runs of whitespace         'awk'
     'FS == ANY SINGLE             On that character             'awk'
     CHARACTER'
     'FS == REGEXP'                On text matching the regexp   'awk'
     'FS == ""'                    Such that each individual     'gawk'
                                   character is a separate
                                   field
     'FIELDWIDTHS == LIST OF       Based on character position   'gawk'
     COLUMNS'
     'FPAT == REGEXP'              On the text surrounding       'gawk'
                                   text matching the regexp

   * Using 'FS = "\n"' causes the entire record to be a single field
     (assuming that newlines separate records).

   * 'FS' may be set from the command line using the '-F' option.  This
     can also be done using command-line variable assignment.

   * Use 'PROCINFO["FS"]' to see how fields are being split.

   * Use 'getline' in its various forms to read additional records from
     the default input stream, from a file, or from a pipe or coprocess.

   * Use 'PROCINFO[FILE, "READ_TIMEOUT"]' to cause reads to time out for
     FILE.

   * Directories on the command line are fatal for standard 'awk';
     'gawk' ignores them if not in POSIX mode.
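   Several of these points can be seen together in one short example.
The data is made up for illustration; note that rebuilding '$0' after
decrementing 'NF' is the behavior of 'gawk' and most current 'awk'
versions:

```shell
# Colon-separated input: count fields, reference the last one, then
# throw fields away by lowering NF, which rebuilds $0 using OFS.
printf 'alpha:beta:gamma:delta\n' |
awk -F: '{
    print "NF =", NF, "last =", $NF
    NF = 2
    print $0
}'
```

This prints 'NF = 4 last = delta' followed by 'alpha beta': the record
was split on ':' but rebuilt with the default 'OFS' (a single space).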


File: gawk.info,  Node: Input Exercises,  Prev: Input Summary,  Up: Reading Files

4.14 Exercises
==============

  1. Using the 'FIELDWIDTHS' variable (*note Constant Size::), write a
     program to read election data, where each record represents one
     voter's votes.  Come up with a way to define which columns are
     associated with each ballot item, and print the total votes,
     including abstentions, for each item.

  2. In *note Plain Getline::, we presented a program to remove C-style
     comments ('/* ... */') from the input.  That program does not work
     if one comment ends on one line and another one starts later on the
     same line.  That can be fixed by making one simple change.  What is
     it?


File: gawk.info,  Node: Printing,  Next: Expressions,  Prev: Reading Files,  Up: Top

5 Printing Output
*****************

One of the most common programming actions is to "print", or output,
some or all of the input.  Use the 'print' statement for simple output,
and the 'printf' statement for fancier formatting.  The 'print'
statement is not limited when computing _which_ values to print.
However, with two exceptions, you cannot specify _how_ to print
them--how many columns, whether to use exponential notation or not, and
so on.  (For the exceptions, *note Output Separators:: and *note
OFMT::.)  For printing with specifications, you need the 'printf'
statement (*note Printf::).

   Besides basic and formatted printing, this major node also covers I/O
redirections to files and pipes, introduces the special file names that
'gawk' processes internally, and discusses the 'close()' built-in
function.

* Menu:

* Print::                       The 'print' statement.
* Print Examples::              Simple examples of 'print' statements.
* Output Separators::           The output separators and how to change them.
* OFMT::                        Controlling Numeric Output With 'print'.
* Printf::                      The 'printf' statement.
* Redirection::                 How to redirect output to multiple files and
                                pipes.
* Special FD::                  Special files for I/O.
* Special Files::               File name interpretation in 'gawk'.
                                'gawk' allows access to inherited file
                                descriptors.
* Close Files And Pipes::       Closing Input and Output Files and Pipes.
* Nonfatal::                    Enabling Nonfatal Output.
* Output Summary::              Output summary.
* Output Exercises::            Exercises.


File: gawk.info,  Node: Print,  Next: Print Examples,  Up: Printing

5.1 The 'print' Statement
=========================

Use the 'print' statement to produce output with simple, standardized
formatting.  You specify only the strings or numbers to print, in a list
separated by commas.  They are output, separated by single spaces,
followed by a newline.  The statement looks like this:

     print ITEM1, ITEM2, ...

The entire list of items may be optionally enclosed in parentheses.  The
parentheses are necessary if any of the item expressions uses the '>'
relational operator; otherwise it could be confused with an output
redirection (*note Redirection::).

   The items to print can be constant strings or numbers, fields of the
current record (such as '$1'), variables, or any 'awk' expression.
Numeric values are converted to strings and then printed.

   The simple statement 'print' with no items is equivalent to 'print
$0': it prints the entire current record.  To print a blank line, use
'print ""'.  To print a fixed piece of text, use a string constant, such
as '"Don't Panic"', as one item.  If you forget to use the double-quote
characters, your text is taken as an 'awk' expression, and you will
probably get an error.  Keep in mind that a space is printed between any
two items.

   Note that the 'print' statement is a statement and not an
expression--you can't use it in the pattern part of a pattern-action
statement, for example.
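   The parenthesization rule matters in practice.  In the following
small sketch, the parentheses force the '>' to be read as a comparison;
without them, 'print 2 > 1' would instead redirect the output to a file
named '1':

```shell
# Parentheses make '>' a comparison instead of a redirection.
awk 'BEGIN { print (2 > 1), "is the comparison result" }'
```

This prints '1 is the comparison result', because the comparison is
true and 'awk' represents true as the number one.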


File: gawk.info,  Node: Print Examples,  Next: Output Separators,  Prev: Print,  Up: Printing

5.2 'print' Statement Examples
==============================

Each 'print' statement makes at least one line of output.  However, it
isn't limited to only one line.  If an item value is a string containing
a newline, the newline is output along with the rest of the string.  A
single 'print' statement can make any number of lines this way.

   The following is an example of printing a string that contains
embedded newlines (the '\n' is an escape sequence, used to represent the
newline character; *note Escape Sequences::):

     $ awk 'BEGIN { print "line one\nline two\nline three" }'
     -| line one
     -| line two
     -| line three

   The next example, which is run on the 'inventory-shipped' file,
prints the first two fields of each input record, with a space between
them:

     $ awk '{ print $1, $2 }' inventory-shipped
     -| Jan 13
     -| Feb 15
     -| Mar 15
     ...

   A common mistake in using the 'print' statement is to omit the comma
between two items.  This often has the effect of making the items run
together in the output, with no space.  The reason for this is that
juxtaposing two string expressions in 'awk' means to concatenate them.
Here is the same program, without the comma:

     $ awk '{ print $1 $2 }' inventory-shipped
     -| Jan13
     -| Feb15
     -| Mar15
     ...

   To someone unfamiliar with the 'inventory-shipped' file, neither
example's output makes much sense.  A heading line at the beginning
would make it clearer.  Let's add some headings to our table of months
('$1') and green crates shipped ('$2').  We do this using a 'BEGIN' rule
(*note BEGIN/END::) so that the headings are only printed once:

     awk 'BEGIN {  print "Month Crates"
                   print "----- ------" }
                {  print $1, $2 }' inventory-shipped

When run, the program prints the following:

     Month Crates
     ----- ------
     Jan 13
     Feb 15
     Mar 15
     ...

The only problem, however, is that the headings and the table data don't
line up!  We can fix this by printing some spaces between the two
fields:

     awk 'BEGIN { print "Month Crates"
                  print "----- ------" }
                { print $1, "     ", $2 }' inventory-shipped

   Lining up columns this way can get pretty complicated when there are
many columns to fix.  Counting spaces for two or three columns is
simple, but any more than this can take up a lot of time.  This is why
the 'printf' statement was created (*note Printf::); one of its
specialties is lining up columns of data.

     NOTE: You can continue either a 'print' or 'printf' statement
     simply by putting a newline after any comma (*note
     Statements/Lines::).


File: gawk.info,  Node: Output Separators,  Next: OFMT,  Prev: Print Examples,  Up: Printing

5.3 Output Separators
=====================

As mentioned previously, a 'print' statement contains a list of items
separated by commas.  In the output, the items are normally separated by
single spaces.  However, this doesn't need to be the case; a single
space is simply the default.  Any string of characters may be used as
the "output field separator" by setting the predefined variable 'OFS'.
The initial value of this variable is the string '" "' (i.e., a single
space).

   The output from an entire 'print' statement is called an "output
record".  Each 'print' statement outputs one output record, and then
outputs a string called the "output record separator" (or 'ORS').  The
initial value of 'ORS' is the string '"\n"' (i.e., a newline character).
Thus, each 'print' statement normally makes a separate line.

   In order to change how output fields and records are separated,
assign new values to the variables 'OFS' and 'ORS'.  The usual place to
do this is in the 'BEGIN' rule (*note BEGIN/END::), so that it happens
before any input is processed.  It can also be done with assignments on
the command line, before the names of the input files, or using the '-v'
command-line option (*note Options::).  The following example prints the
first and second fields of each input record, separated by a semicolon,
with a blank line added after each newline:

     $ awk 'BEGIN { OFS = ";"; ORS = "\n\n" }
     >            { print $1, $2 }' mail-list
     -| Amelia;555-5553
     -|
     -| Anthony;555-3412
     -|
     -| Becky;555-7685
     -|
     -| Bill;555-1675
     -|
     -| Broderick;555-0542
     -|
     -| Camilla;555-2912
     -|
     -| Fabius;555-1234
     -|
     -| Julie;555-6699
     -|
     -| Martin;555-6480
     -|
     -| Samuel;555-3430
     -|
     -| Jean-Paul;555-2127
     -|

   If the value of 'ORS' does not contain a newline, the program's
output runs together on a single line.
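   A compact illustration of both separators (the separator values
chosen here are arbitrary):

```shell
# OFS separates items within a record; ORS ends each record.  With no
# newline in ORS, successive print statements share one output line.
awk 'BEGIN { OFS = "-"; ORS = "|"; print "a", "b"; print "c" }'
```

The output is 'a-b|c|', all on one line: 'OFS' appears between the two
items of the first 'print', and 'ORS' terminates each statement.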


File: gawk.info,  Node: OFMT,  Next: Printf,  Prev: Output Separators,  Up: Printing

5.4 Controlling Numeric Output with 'print'
===========================================

When printing numeric values with the 'print' statement, 'awk'
internally converts each number to a string of characters and prints
that string.  'awk' uses the 'sprintf()' function to do this conversion
(*note String Functions::).  For now, it suffices to say that the
'sprintf()' function accepts a "format specification" that tells it how
to format numbers (or strings), and that there are a number of different
ways in which numbers can be formatted.  The different format
specifications are discussed more fully in *note Control Letters::.

   The predefined variable 'OFMT' contains the format specification that
'print' uses with 'sprintf()' when it wants to convert a number to a
string for printing.  The default value of 'OFMT' is '"%.6g"'.  The way
'print' prints numbers can be changed by supplying a different format
specification for the value of 'OFMT', as shown in the following
example:

     $ awk 'BEGIN {
     >   OFMT = "%.0f"  # print numbers as integers (rounds)
     >   print 17.23, 17.54 }'
     -| 17 18

According to the POSIX standard, 'awk''s behavior is undefined if 'OFMT'
contains anything but a floating-point conversion specification.  (d.c.)
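   One further point, easy to check directly: values that happen to be
exact integers are printed as integers no matter what 'OFMT' says; only
nonintegral values go through the 'OFMT' conversion:

```shell
# OFMT applies only to nonintegral numeric values.
awk 'BEGIN { OFMT = "%.2f"; print 3.14159, 42 }'
```

This prints '3.14 42', not '3.14 42.00'.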


File: gawk.info,  Node: Printf,  Next: Redirection,  Prev: OFMT,  Up: Printing

5.5 Using 'printf' Statements for Fancier Printing
==================================================

For more precise control over the output format than what is provided by
'print', use 'printf'.  With 'printf' you can specify the width to use
for each item, as well as various formatting choices for numbers (such
as what output base to use, whether to print an exponent, whether to
print a sign, and how many digits to print after the decimal point).

* Menu:

* Basic Printf::                Syntax of the 'printf' statement.
* Control Letters::             Format-control letters.
* Format Modifiers::            Format-specification modifiers.
* Printf Examples::             Several examples.


File: gawk.info,  Node: Basic Printf,  Next: Control Letters,  Up: Printf

5.5.1 Introduction to the 'printf' Statement
--------------------------------------------

A simple 'printf' statement looks like this:

     printf FORMAT, ITEM1, ITEM2, ...

As for 'print', the entire list of arguments may optionally be enclosed
in parentheses.  Here too, the parentheses are necessary if any of the
item expressions uses the '>' relational operator; otherwise, it can be
confused with an output redirection (*note Redirection::).

   The difference between 'printf' and 'print' is the FORMAT argument.
This is an expression whose value is taken as a string; it specifies how
to output each of the other arguments.  It is called the "format
string".

   The format string is very similar to that in the ISO C library
function 'printf()'.  Most of FORMAT is text to output verbatim.
Scattered among this text are "format specifiers"--one per item.  Each
format specifier says to output the next item in the argument list at
that place in the format.

   The 'printf' statement does not automatically append a newline to its
output.  It outputs only what the format string specifies.  So if a
newline is needed, you must include one in the format string.  The
output separator variables 'OFS' and 'ORS' have no effect on 'printf'
statements.  For example:

     $ awk 'BEGIN {
     >    ORS = "\nOUCH!\n"; OFS = "+"
     >    msg = "Don\47t Panic!"
     >    printf "%s\n", msg
     > }'
     -| Don't Panic!

Here, neither the '+' nor the 'OUCH!' appears in the output message.


File: gawk.info,  Node: Control Letters,  Next: Format Modifiers,  Prev: Basic Printf,  Up: Printf

5.5.2 Format-Control Letters
----------------------------

A format specifier starts with the character '%' and ends with a
"format-control letter"--it tells the 'printf' statement how to output
one item.  The format-control letter specifies what _kind_ of value to
print.  The rest of the format specifier is made up of optional
"modifiers" that control _how_ to print the value, such as the field
width.  Here is a list of the format-control letters:

'%c'
     Print a number as a character; thus, 'printf "%c", 65' outputs the
     letter 'A'.  The output for a string value is the first character
     of the string.

          NOTE: The POSIX standard says the first character of a string
          is printed.  In locales with multibyte characters, 'gawk'
          attempts to convert the leading bytes of the string into a
          valid wide character and then to print the multibyte encoding
          of that character.  Similarly, when printing a numeric value,
          'gawk' allows the value to be within the numeric range of
          values that can be held in a wide character.  If the
          conversion to multibyte encoding fails, 'gawk' uses the low
          eight bits of the value as the character to print.

          Other 'awk' versions generally restrict themselves to printing
          the first byte of a string or to numeric values within the
          range of a single byte (0-255).

'%d', '%i'
     Print a decimal integer.  The two control letters are equivalent.
     (The '%i' specification is for compatibility with ISO C.)

'%e', '%E'
     Print a number in scientific (exponential) notation.  For example:

          printf "%4.3e\n", 1950

     prints '1.950e+03', with a total of four significant figures, three
     of which follow the decimal point.  (The '4.3' represents two
     modifiers, discussed in the next node.)  '%E' uses 'E' instead of
     'e' in the output.

'%f'
     Print a number in floating-point notation.  For example:

          printf "%4.3f", 1950

     prints '1950.000', with a minimum of four significant figures,
     three of which follow the decimal point.  (The '4.3' represents two
     modifiers, discussed in the next node.)

     On systems supporting IEEE 754 floating-point format, values
     representing negative infinity are formatted as '-inf' or
     '-infinity', and positive infinity as 'inf' or 'infinity'.  The
     special "not a number" value formats as '-nan' or 'nan' (*note Math
     Definitions::).

'%F'
     Like '%f', but the infinity and "not a number" values are spelled
     using uppercase letters.

     The '%F' format is a POSIX extension to ISO C; not all systems
     support it.  On those that don't, 'gawk' uses '%f' instead.

'%g', '%G'
     Print a number in either scientific notation or in floating-point
     notation, whichever uses fewer characters; if the result is printed
     in scientific notation, '%G' uses 'E' instead of 'e'.

'%o'
     Print an unsigned octal integer (*note Nondecimal-numbers::).

'%s'
     Print a string.

'%u'
     Print an unsigned decimal integer.  (This format is of marginal
     use, because all numbers in 'awk' are floating point; it is
     provided primarily for compatibility with C.)

'%x', '%X'
     Print an unsigned hexadecimal integer; '%X' uses the letters 'A'
     through 'F' instead of 'a' through 'f' (*note
     Nondecimal-numbers::).

'%%'
     Print a single '%'.  This does not consume an argument and it
     ignores any modifiers.

     NOTE: When using the integer format-control letters for values that
     are outside the range of the widest C integer type, 'gawk' switches
     to the '%g' format specifier.  If '--lint' is provided on the
     command line (*note Options::), 'gawk' warns about this.  Other
     versions of 'awk' may print invalid values or do something else
     entirely.  (d.c.)
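   Most of these letters can be compared side by side on a single value
(255 and 65 here are arbitrary):

```shell
# The same value printed under several format-control letters.
awk 'BEGIN {
    printf "%d %o %x %X %e %f %g %c %s %%\n",
           255, 255, 255, 255, 255, 255, 255, 65, "str"
}'
```

This prints '255 377 ff FF 2.550000e+02 255.000000 255 A str %': the
same quantity rendered as decimal, octal, and hexadecimal, in the two
floating-point notations, and, for 65, as the character 'A'.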


File: gawk.info,  Node: Format Modifiers,  Next: Printf Examples,  Prev: Control Letters,  Up: Printf

5.5.3 Modifiers for 'printf' Formats
------------------------------------

A format specification can also include "modifiers" that can control how
much of the item's value is printed, as well as how much space it gets.
The modifiers come between the '%' and the format-control letter.  We
use the bullet symbol "*" in the following examples to represent spaces
in the output.  Here are the possible modifiers, in the order in which
they may appear:

'N$'
     An integer constant followed by a '$' is a "positional specifier".
     Normally, format specifications are applied to arguments in the
     order given in the format string.  With a positional specifier, the
     format specification is applied to a specific argument, instead of
     what would be the next argument in the list.  Positional specifiers
     begin counting with one.  Thus:

          printf "%s %s\n", "don't", "panic"
          printf "%2$s %1$s\n", "panic", "don't"

     prints the famous friendly message twice.

     At first glance, this feature doesn't seem to be of much use.  It
     is in fact a 'gawk' extension, intended for use in translating
     messages at runtime.  *Note Printf Ordering::, which describes how
     and why to use positional specifiers.  For now, we ignore them.

'-' (Minus)
     The minus sign, used before the width modifier (see later on in
     this list), says to left-justify the argument within its specified
     width.  Normally, the argument is printed right-justified in the
     specified width.  Thus:

          printf "%-4s", "foo"

     prints 'foo*'.

SPACE
     For numeric conversions, prefix positive values with a space and
     negative values with a minus sign.

'+'
     The plus sign, used before the width modifier (see later on in this
     list), says to always supply a sign for numeric conversions, even
     if the data to format is positive.  The '+' overrides the space
     modifier.

'#'
     Use an "alternative form" for certain control letters.  For '%o',
     supply a leading zero.  For '%x' and '%X', supply a leading '0x' or
     '0X' for a nonzero result.  For '%e', '%E', '%f', and '%F', the
     result always contains a decimal point.  For '%g' and '%G',
     trailing zeros are not removed from the result.

'0'
     A leading '0' (zero) acts as a flag indicating that output should
     be padded with zeros instead of spaces.  This applies only to the
     numeric output formats.  This flag only has an effect when the
     field width is wider than the value to print.

'''
     A single quote or apostrophe character is a POSIX extension to ISO
     C. It indicates that the integer part of a floating-point value, or
     the entire value of a decimal integer, should have a
     thousands-separator character in it.  This only works in locales
     that support such characters.  For example:

          $ cat thousands.awk          Show source program
          -| BEGIN { printf "%'d\n", 1234567 }
          $ LC_ALL=C gawk -f thousands.awk
          -| 1234567                   Results in "C" locale
          $ LC_ALL=en_US.UTF-8 gawk -f thousands.awk
          -| 1,234,567                 Results in US English UTF locale

     For more information about locales and internationalization issues,
     see *note Locales::.

          NOTE: The ''' flag is a nice feature, but its use complicates
          things: it becomes difficult to use it in command-line
          programs.  For information on appropriate quoting tricks, see
          *note Quoting::.

WIDTH
     This is a number specifying the desired minimum width of a field.
     Inserting any number between the '%' sign and the format-control
     character forces the field to expand to this width.  The default
     way to do this is to pad with spaces on the left.  For example:

          printf "%4s", "foo"

     prints '*foo'.

     The value of WIDTH is a minimum width, not a maximum.  If the item
     value requires more than WIDTH characters, it can be as wide as
     necessary.  Thus, the following:

          printf "%4s", "foobar"

     prints 'foobar'.

     Preceding the WIDTH with a minus sign causes the output to be
     padded with spaces on the right, instead of on the left.

'.PREC'
     A period followed by an integer constant specifies the precision to
     use when printing.  The meaning of the precision varies by control
     letter:

     '%d', '%i', '%o', '%u', '%x', '%X'
          Minimum number of digits to print.

     '%e', '%E', '%f', '%F'
          Number of digits to the right of the decimal point.

     '%g', '%G'
          Maximum number of significant digits.

     '%s'
          Maximum number of characters from the string that should
          print.

     Thus, the following:

          printf "%.4s", "foobar"

     prints 'foob'.

   The C library 'printf''s dynamic WIDTH and PREC capability (e.g.,
'"%*.*s"') is supported.  Instead of supplying explicit WIDTH and/or
PREC values in the format string, they are passed in the argument list.
For example:

     w = 5
     p = 3
     s = "abcdefg"
     printf "%*.*s\n", w, p, s

is exactly equivalent to:

     s = "abcdefg"
     printf "%5.3s\n", s

Both programs output '**abc'.  Earlier versions of 'awk' did not support
this capability.  If you must use such a version, you may simulate this
feature by using concatenation to build up the format string, like so:

     w = 5
     p = 3
     s = "abcdefg"
     printf "%" w "." p "s\n", s

This is not particularly easy to read, but it does work.

   C programmers may be used to supplying additional modifiers ('h',
'j', 'l', 'L', 't', and 'z') in 'printf' format strings.  These are not
valid in 'awk'.  Most 'awk' implementations silently ignore them.  If
'--lint' is provided on the command line (*note Options::), 'gawk' warns
about their use.  If '--posix' is supplied, their use is a fatal error.
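   The modifiers combine as described.  Here is a brief sketch, with
brackets added only to make the padding visible:

```shell
# Width, flags, and precision in combination.
awk 'BEGIN {
    printf "[%6s][%-6s][%.3s]\n", "ab", "ab", "abcdef"
    printf "[%+08.2f][% d]\n", 3.14159, 42
}'
```

The first line prints '[    ab][ab    ][abc]' (right-justified,
left-justified, and truncated to three characters); the second prints
'[+0003.14][ 42]' (forced sign with zero padding, then the space flag).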


File: gawk.info,  Node: Printf Examples,  Prev: Format Modifiers,  Up: Printf

5.5.4 Examples Using 'printf'
-----------------------------

The following simple example shows how to use 'printf' to make an
aligned table:

     awk '{ printf "%-10s %s\n", $1, $2 }' mail-list

This command prints the names of the people ('$1') in the file
'mail-list' as a string of 10 characters that are left-justified.  It
also prints the phone numbers ('$2') next on the line.  This produces an
aligned two-column table of names and phone numbers, as shown here:

     $ awk '{ printf "%-10s %s\n", $1, $2 }' mail-list
     -| Amelia     555-5553
     -| Anthony    555-3412
     -| Becky      555-7685
     -| Bill       555-1675
     -| Broderick  555-0542
     -| Camilla    555-2912
     -| Fabius     555-1234
     -| Julie      555-6699
     -| Martin     555-6480
     -| Samuel     555-3430
     -| Jean-Paul  555-2127

   In this case, the phone numbers had to be printed as strings because
the numbers are separated by dashes.  Printing the phone numbers as
numbers would have produced just the first three digits: '555'.  This
would have been pretty confusing.

   It wasn't necessary to specify a width for the phone numbers because
they are last on their lines.  They don't need to have spaces after
them.

   The table could be made to look even nicer by adding headings to the
tops of the columns.  This is done using a 'BEGIN' rule (*note
BEGIN/END::) so that the headers are only printed once, at the beginning
of the 'awk' program:

     awk 'BEGIN { print "Name      Number"
                  print "----      ------" }
                { printf "%-10s %s\n", $1, $2 }' mail-list

   The preceding example mixes 'print' and 'printf' statements in the
same program.  Using just 'printf' statements can produce the same
results:

     awk 'BEGIN { printf "%-10s %s\n", "Name", "Number"
                  printf "%-10s %s\n", "----", "------" }
                { printf "%-10s %s\n", $1, $2 }' mail-list

Printing each column heading with the same format specification used for
the column elements ensures that the headings are aligned just like the
columns.

   The fact that the same format specification is used three times can
be emphasized by storing it in a variable, like this:

     awk 'BEGIN { format = "%-10s %s\n"
                  printf format, "Name", "Number"
                  printf format, "----", "------" }
                { printf format, $1, $2 }' mail-list


File: gawk.info,  Node: Redirection,  Next: Special FD,  Prev: Printf,  Up: Printing

5.6 Redirecting Output of 'print' and 'printf'
==============================================

So far, the output from 'print' and 'printf' has gone to the standard
output, usually the screen.  Both 'print' and 'printf' can also send
their output to other places.  This is called "redirection".

     NOTE: When '--sandbox' is specified (*note Options::), redirecting
     output to files, pipes, and coprocesses is disabled.

   A redirection appears after the 'print' or 'printf' statement.
Redirections in 'awk' are written just like redirections in shell
commands, except that they are written inside the 'awk' program.

   There are four forms of output redirection: output to a file, output
appended to a file, output through a pipe to another command, and output
to a coprocess.  We show them all for the 'print' statement, but they
work identically for 'printf':

'print ITEMS > OUTPUT-FILE'
     This redirection prints the items into the output file named
     OUTPUT-FILE.  The file name OUTPUT-FILE can be any expression.  Its
     value is changed to a string and then used as a file name (*note
     Expressions::).

     When this type of redirection is used, the OUTPUT-FILE is erased
     before the first output is written to it.  Subsequent writes to the
     same OUTPUT-FILE do not erase OUTPUT-FILE, but append to it.  (This
     is different from how you use redirections in shell scripts.)  If
     OUTPUT-FILE does not exist, it is created.  For example, here is
     how an 'awk' program can write a list of people's names to one file
     named 'name-list', and a list of phone numbers to another file
     named 'phone-list':

          $ awk '{ print $2 > "phone-list"
          >        print $1 > "name-list" }' mail-list
          $ cat phone-list
          -| 555-5553
          -| 555-3412
          ...
          $ cat name-list
          -| Amelia
          -| Anthony
          ...

     Each output file contains one name or number per line.

'print ITEMS >> OUTPUT-FILE'
     This redirection prints the items into the preexisting output file
     named OUTPUT-FILE.  The difference between this and the single-'>'
     redirection is that the old contents (if any) of OUTPUT-FILE are
     not erased.  Instead, the 'awk' output is appended to the file.  If
     OUTPUT-FILE does not exist, then it is created.

'print ITEMS | COMMAND'
     It is possible to send output to another program through a pipe
     instead of into a file.  This redirection opens a pipe to COMMAND,
     and writes the values of ITEMS through this pipe to another process
     created to execute COMMAND.

     The redirection argument COMMAND is actually an 'awk' expression.
     Its value is converted to a string whose contents give the shell
     command to be run.  For example, the following produces two files,
     one unsorted list of people's names, and one list sorted in reverse
     alphabetical order:

          awk '{ print $1 > "names.unsorted"
                 command = "sort -r > names.sorted"
                 print $1 | command }' mail-list

     The unsorted list is written with an ordinary redirection, while
     the sorted list is written by piping through the 'sort' utility.

     The next example uses redirection to mail a message to the mailing
     list 'bug-system'.  This might be useful when trouble is
     encountered in an 'awk' script run periodically for system
     maintenance:

          report = "mail bug-system"
          print("Awk script failed:", $0) | report
          print("at record number", FNR, "of", FILENAME) | report
          close(report)

     The 'close()' function is called here because it's a good idea to
     close the pipe as soon as all the intended output has been sent to
     it.  *Note Close Files And Pipes:: for more information.

     This example also illustrates the use of a variable to represent a
     FILE or COMMAND--it is not necessary to always use a string
     constant.  Using a variable is generally a good idea, because (if
     you mean to refer to that same file or command) 'awk' requires that
     the string value be written identically every time.

'print ITEMS |& COMMAND'
     This redirection prints the items to the input of COMMAND.  The
     difference between this and the single-'|' redirection is that the
     output from COMMAND can be read with 'getline'.  Thus, COMMAND is a
     "coprocess", which works together with but is subsidiary to the
     'awk' program.

     This feature is a 'gawk' extension, and is not available in POSIX
     'awk'.  *Note Getline/Coprocess::, for a brief discussion.  *Note
     Two-way I/O::, for a more complete discussion.
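
   As a small illustration of the idea (a sketch, not one of this
manual's examples; it assumes the 'rev' utility is available), a 'gawk'
program can write a line to a coprocess and read the transformed line
back:

```shell
# Sketch of a coprocess (gawk extension): write to 'rev', read the result.
gawk 'BEGIN {
    cmd = "rev"
    print "hello" |& cmd     # send a line to the coprocess
    close(cmd, "to")         # close the "to" end so rev sees end-of-file
    cmd |& getline result    # read rev'\''s output back
    close(cmd)
    print result             # prints "olleh"
}'
```

Closing only the "to" end of the pipe is what lets the coprocess finish
and produce its output; this two-argument form of 'close()' is described
later in this major node.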

   Redirecting output using '>', '>>', '|', or '|&' asks the system to
open a file, pipe, or coprocess only if the particular FILE or COMMAND
you specify has not already been written to by your program or if it has
been closed since it was last written to.

   It is a common error to use '>' redirection for the first 'print' to
a file, and then to use '>>' for subsequent output:

     # clear the file
     print "Don't panic" > "guide.txt"
     ...
     # append
     print "Avoid improbability generators" >> "guide.txt"

This is indeed how redirections must be used from the shell.  But in
'awk', it isn't necessary.  In this kind of case, a program should use
'>' for all the 'print' statements, because the output file is only
opened once.  (It happens that if you mix '>' and '>>', output is
produced in the expected order.  However, mixing the operators for the
same file is definitely poor style, and is confusing to readers of your
program.)

   Many older 'awk' implementations limit the number of pipelines that
an 'awk' program may have open to just one!  In 'gawk', there is no such
limit.  'gawk' allows a program to open as many pipelines as the
underlying operating system permits.

                           Piping into 'sh'

   A particularly powerful way to use redirection is to build command
lines and pipe them into the shell, 'sh'.  For example, suppose you have
a list of files brought over from a system where all the file names are
stored in uppercase, and you wish to rename them to have names in all
lowercase.  The following program is both simple and efficient:

     { printf("mv %s %s\n", $0, tolower($0)) | "sh" }

     END { close("sh") }

   The 'tolower()' function returns its argument string with all
uppercase characters converted to lowercase (*note String Functions::).
The program builds up a list of command lines, using the 'mv' utility to
rename the files.  It then sends the list to the shell for execution.

   *Note Shell Quoting:: for a function that can help in generating
command lines to be fed to the shell.


File: gawk.info,  Node: Special FD,  Next: Special Files,  Prev: Redirection,  Up: Printing

5.7 Special Files for Standard Preopened Data Streams
=====================================================

Running programs conventionally have three input and output streams
already available to them for reading and writing.  These are known as
the "standard input", "standard output", and "standard error output".
These open streams (and any other open files or pipes) are often
referred to by the technical term "file descriptors".

   These streams are, by default, connected to your keyboard and screen,
but they are often redirected with the shell, via the '<', '<<', '>',
'>>', '>&', and '|' operators.  Standard error is typically used for
writing error messages; the reason there are two separate streams,
standard output and standard error, is so that they can be redirected
separately.

   In traditional implementations of 'awk', the only way to write an
error message to standard error in an 'awk' program is as follows:

     print "Serious error detected!" | "cat 1>&2"

This works by opening a pipeline to a shell command that can access the
standard error stream that it inherits from the 'awk' process.  This is
far from elegant, and it also requires a separate process.  So people
writing 'awk' programs often don't do this.  Instead, they send the
error messages to the screen, like this:

     print "Serious error detected!" > "/dev/tty"

('/dev/tty' is a special file supplied by the operating system that is
connected to your keyboard and screen.  It represents the "terminal,"(1)
which on modern systems is a keyboard and screen, not a serial console.)
This generally has the same effect, but not always: although the
standard error stream is usually the screen, it can be redirected; when
that happens, writing to the screen is not correct.  In fact, if 'awk'
is run from a background job, it may not have a terminal at all.  Then
opening '/dev/tty' fails.

   'gawk', BWK 'awk', and 'mawk' provide special file names for
accessing the three standard streams.  If the file name matches one of
these special names when 'gawk' (or one of the others) redirects input
or output, then it directly uses the descriptor that the file name
stands for.  These special file names work for all operating systems
that 'gawk' has been ported to, not just those that are POSIX-compliant:

'/dev/stdin'
     The standard input (file descriptor 0).

'/dev/stdout'
     The standard output (file descriptor 1).

'/dev/stderr'
     The standard error output (file descriptor 2).

   With these facilities, the proper way to write an error message then
becomes:

     print "Serious error detected!" > "/dev/stderr"

   Note the use of quotes around the file name.  As with any other
redirection, the value must be a string.  It is a common error to omit
the quotes, which leads to confusing results.

   'gawk' does not treat these file names as special when in
POSIX-compatibility mode.  However, because BWK 'awk' supports them,
'gawk' does support them even when invoked with the '--traditional'
option (*note Options::).

   ---------- Footnotes ----------

   (1) The "tty" in '/dev/tty' stands for "Teletype," a serial terminal.


File: gawk.info,  Node: Special Files,  Next: Close Files And Pipes,  Prev: Special FD,  Up: Printing

5.8 Special File Names in 'gawk'
================================

Besides access to standard input, standard output, and standard error,
'gawk' provides access to any open file descriptor.  Additionally, there
are special file names reserved for TCP/IP networking.

* Menu:

* Other Inherited Files::       Accessing other open files with
                                'gawk'.
* Special Network::             Special files for network communications.
* Special Caveats::             Things to watch out for.


File: gawk.info,  Node: Other Inherited Files,  Next: Special Network,  Up: Special Files

5.8.1 Accessing Other Open Files with 'gawk'
--------------------------------------------

Besides the '/dev/stdin', '/dev/stdout', and '/dev/stderr' special file
names mentioned earlier, 'gawk' provides syntax for accessing any other
inherited open file:

'/dev/fd/N'
     The file associated with file descriptor N.  Such a file must be
     opened by the program initiating the 'awk' execution (typically the
     shell).  Unless special pains are taken in the shell from which
     'gawk' is invoked, only descriptors 0, 1, and 2 are available.

   The file names '/dev/stdin', '/dev/stdout', and '/dev/stderr' are
essentially aliases for '/dev/fd/0', '/dev/fd/1', and '/dev/fd/2',
respectively.  However, those names are more self-explanatory.
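
   For example (a sketch; the file name 'out.txt' is just for
illustration, and it relies on the shell opening descriptor 3 before
'gawk' starts), the shell can attach a descriptor to a file and the
program can then write to it by name:

```shell
# The shell opens descriptor 3 on out.txt; gawk writes to that
# descriptor via the /dev/fd/3 special file name.
gawk 'BEGIN { print "via descriptor 3" > "/dev/fd/3" }' 3> out.txt
cat out.txt     # shows: via descriptor 3
```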

   Note that using 'close()' on a file name of the form '"/dev/fd/N"',
for file descriptor numbers above two, does actually close the given
file descriptor.


File: gawk.info,  Node: Special Network,  Next: Special Caveats,  Prev: Other Inherited Files,  Up: Special Files

5.8.2 Special Files for Network Communications
----------------------------------------------

'gawk' programs can open a two-way TCP/IP connection, acting as either a
client or a server.  This is done using a special file name of the form:

     /NET-TYPE/PROTOCOL/LOCAL-PORT/REMOTE-HOST/REMOTE-PORT

   The NET-TYPE is one of 'inet', 'inet4', or 'inet6'.  The PROTOCOL is
one of 'tcp' or 'udp', and the other fields represent the other
essential pieces of information for making a networking connection.
These file names are used with the '|&' operator for communicating with
a coprocess (*note Two-way I/O::).  This is an advanced feature,
mentioned here only for completeness.  Full discussion is delayed until
*note TCP/IP Networking::.


File: gawk.info,  Node: Special Caveats,  Prev: Special Network,  Up: Special Files

5.8.3 Special File Name Caveats
-------------------------------

Here are some things to bear in mind when using the special file names
that 'gawk' provides:

   * Recognition of the file names for the three standard preopened
     files is disabled only in POSIX mode.

   * Recognition of the other special file names is disabled if 'gawk'
     is in compatibility mode (either '--traditional' or '--posix';
     *note Options::).

   * 'gawk' _always_ interprets these special file names.  For example,
     using '/dev/fd/4' for output actually writes on file descriptor 4,
     and not on a new file descriptor that is 'dup()'ed from file
     descriptor 4.  Most of the time this does not matter; however, it
     is important to _not_ close any of the files related to file
     descriptors 0, 1, and 2.  Doing so results in unpredictable
     behavior.


File: gawk.info,  Node: Close Files And Pipes,  Next: Nonfatal,  Prev: Special Files,  Up: Printing

5.9 Closing Input and Output Redirections
=========================================

If the same file name or the same shell command is used with 'getline'
more than once during the execution of an 'awk' program (*note
Getline::), the file is opened (or the command is executed) the first
time only.  At that time, the first record of input is read from that
file or command.  The next time the same file or command is used with
'getline', another record is read from it, and so on.
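
   For example (a sketch using a hypothetical data file 'data.txt'),
two 'getline' calls with the same command string read successive records
from a single run of that command:

```shell
# Create a two-line data file, then read it through a pipe twice;
# the second getline continues from the same open pipe.
printf 'alpha\nbeta\n' > data.txt
awk 'BEGIN {
    cmd = "cat data.txt"
    cmd | getline first    # runs the command, reads its first record
    cmd | getline second   # same pipe, reads the next record
    print first, second    # prints "alpha beta"
    close(cmd)
}'
```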

   Similarly, when a file or pipe is opened for output, 'awk' remembers
the file name or command associated with it, and subsequent writes to
the same file or command are appended to the previous writes.  The file
or pipe stays open until 'awk' exits.

   This implies that special steps are necessary in order to read the
same file again from the beginning, or to rerun a shell command (rather
than reading more output from the same command).  The 'close()' function
makes these things possible:

     close(FILENAME)

or:

     close(COMMAND)

   The argument FILENAME or COMMAND can be any expression.  Its value
must _exactly_ match the string that was used to open the file or start
the command (spaces and other "irrelevant" characters included).  For
example, if you open a pipe with this:

     "sort -r names" | getline foo

then you must close it with this:

     close("sort -r names")

   Once this function call is executed, the next 'getline' from that
file or command, or the next 'print' or 'printf' to that file or
command, reopens the file or reruns the command.  Because the expression
that you use to close a file or pipeline must exactly match the
expression used to open the file or run the command, it is good practice
to use a variable to store the file name or command.  The previous
example becomes the following:

     sortcom = "sort -r names"
     sortcom | getline foo
     ...
     close(sortcom)

This helps avoid hard-to-find typographical errors in your 'awk'
programs.  Here are some of the reasons for closing an output file:

   * To write a file and read it back later on in the same 'awk'
     program.  Close the file after writing it, then begin reading it
     with 'getline'.

   * To write numerous files, successively, in the same 'awk' program.
     If the files aren't closed, eventually 'awk' may exceed a system
     limit on the number of open files in one process.  It is best to
     close each one when the program has finished writing it.

   * To make a command finish.  When output is redirected through a
     pipe, the command reading the pipe normally continues to try to
     read input as long as the pipe is open.  Often this means the
     command cannot really do its work until the pipe is closed.  For
     example, if output is redirected to the 'mail' program, the message
     is not actually sent until the pipe is closed.

   * To run the same program a second time, with the same arguments.
     This is not the same thing as giving more input to the first run!

     For example, suppose a program pipes output to the 'mail' program.
     If it outputs several lines redirected to this pipe without closing
     it, they make a single message of several lines.  By contrast, if
     the program closes the pipe after each line of output, then each
     line makes a separate message.
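
   The first reason above can be sketched as follows (the file name
'tmp.data' is just for illustration):

```shell
# Write a file, close it, then read it back in the same program.
awk 'BEGIN {
    file = "tmp.data"
    print "first"  > file
    print "second" > file
    close(file)                        # finish writing before reading
    while ((getline line < file) > 0)
        print "read:", line
    close(file)
}'
```

Without the first 'close()', the 'getline' loop could begin reading
before all the output had been flushed to the file.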

   If you use more files than the system allows you to have open, 'gawk'
attempts to multiplex the available open files among your data files.
'gawk''s ability to do this depends upon the facilities of your
operating system, so it may not always work.  It is therefore both good
practice and good portability advice to always use 'close()' on your
files when you are done with them.  In fact, if you are using a lot of
pipes, it is essential that you close commands when done.  For example,
consider something like this:

     {
         ...
         command = ("grep " $1 " /some/file | my_prog -q " $3)
         while ((command | getline) > 0) {
             PROCESS OUTPUT OF command
         }
         # need close(command) here
     }

   This example creates a new pipeline based on data in _each_ record.
Without the call to 'close()' indicated in the comment, 'awk' creates
child processes to run the commands, until it eventually runs out of
file descriptors for more pipelines.

   Even though each command has finished (as indicated by the
end-of-file return status from 'getline'), the child process is not
terminated;(1) more importantly, the file descriptor for the pipe is not
closed and released until 'close()' is called or 'awk' exits.

   'close()' silently does nothing if given an argument that does not
represent a file, pipe, or coprocess that was opened with a redirection.
In such a case, it returns a negative value, indicating an error.  In
addition, 'gawk' sets 'ERRNO' to a string indicating the error.

   Note also that 'close(FILENAME)' has no "magic" effects on the
implicit loop that reads through the files named on the command line.
It is, more likely, a close of a file that was never opened with a
redirection, so 'awk' silently does nothing, except return a negative
value.

   When using the '|&' operator to communicate with a coprocess, it is
occasionally useful to be able to close one end of the two-way pipe
without closing the other.  This is done by supplying a second argument
to 'close()'.  As in any other call to 'close()', the first argument is
the name of the command or special file used to start the coprocess.
The second argument should be a string, with either of the values '"to"'
or '"from"'.  Case does not matter.  As this is an advanced feature,
discussion is delayed until *note Two-way I/O::, which describes it in
more detail and gives an example.

                    Using 'close()''s Return Value

   In many older versions of Unix 'awk', the 'close()' function is
actually a statement.  (d.c.)  It is a syntax error to try to use the
return value from 'close()':

     command = "..."
     command | getline info
     retval = close(command)  # syntax error in many Unix awks

   'gawk' treats 'close()' as a function.  The return value is -1 if the
argument names something that was never opened with a redirection, or if
there is a system problem closing the file or process.  In these cases,
'gawk' sets the predefined variable 'ERRNO' to a string describing the
problem.

   In 'gawk', starting with version 4.2, when closing a pipe or
coprocess (input or output), the return value is the exit status of the
command, as described in *note Table 5.1:
table-close-pipe-return-values.(2)  Otherwise, it is the return value
from the system's 'close()' or 'fclose()' C functions when closing input
or output files, respectively.  This value is zero if the close
succeeds, or -1 if it fails.

Situation                                   Return value from 'close()'
--------------------------------------------------------------------------
Normal exit of command                      Command's exit status
Death by signal of command                  256 + number of murderous signal
Death by signal of command with core dump   512 + number of murderous signal
Some kind of error                          -1

Table 5.1: Return values from 'close()' of a pipe

   The POSIX standard is very vague; it says that 'close()' returns zero
on success and a nonzero value otherwise.  In general, different
implementations vary in what they report when closing pipes; thus, the
return value cannot be used portably.  (d.c.)  In POSIX mode (*note
Options::), 'gawk' just returns zero when closing a pipe.
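
   For example (a sketch assuming 'gawk' 4.2 or later, not in POSIX
mode; the command string is purely illustrative), the exit status of a
failing pipeline can be captured like this:

```shell
# close() of a pipe returns the command's exit status (gawk 4.2+).
gawk 'BEGIN {
    cmd = "cat > /dev/null; exit 3"   # consumes input, then fails with status 3
    print "some output" | cmd
    retval = close(cmd)
    print "close() returned", retval  # 3 with gawk 4.2 and later
}'
```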

   ---------- Footnotes ----------

   (1) The technical terminology is rather morbid.  The finished child
is called a "zombie," and cleaning up after it is referred to as
"reaping."

   (2) Prior to version 4.2, the return value from closing a pipe or
co-process was the full 16-bit exit value as defined by the 'wait()'
system call.


File: gawk.info,  Node: Nonfatal,  Next: Output Summary,  Prev: Close Files And Pipes,  Up: Printing

5.10 Enabling Nonfatal Output
=============================

This minor node describes a 'gawk'-specific feature.

   In standard 'awk', output with 'print' or 'printf' to a nonexistent
file, or some other I/O error (such as filling up the disk) is a fatal
error.

     $ gawk 'BEGIN { print "hi" > "/no/such/file" }'
     error-> gawk: cmd. line:1: fatal: can't redirect to `/no/such/file' (No such file or directory)

   'gawk' makes it possible to detect that an error has occurred,
allowing you to possibly recover from the error, or at least print an
error message of your choosing before exiting.  You can do this in one
of two ways:

   * For all output files, by assigning any value to
     'PROCINFO["NONFATAL"]'.

   * On a per-file basis, by assigning any value to 'PROCINFO[FILENAME,
     "NONFATAL"]'.  Here, FILENAME is the name of the file to which you
     wish output to be nonfatal.

   Once you have enabled nonfatal output, you must check 'ERRNO' after
every relevant 'print' or 'printf' statement to see if something went
wrong.  It is also a good idea to initialize 'ERRNO' to zero before
attempting the output.  For example:

     $ gawk '
     > BEGIN {
     >     PROCINFO["NONFATAL"] = 1
     >     ERRNO = 0
     >     print "hi" > "/no/such/file"
     >     if (ERRNO) {
     >         print("Output failed:", ERRNO) > "/dev/stderr"
     >         exit 1
     >     }
     > }'
     error-> Output failed: No such file or directory

   Here, 'gawk' did not produce a fatal error; instead it let the 'awk'
program code detect the problem and handle it.

   This mechanism also works for standard output and standard error.
For standard output, you may use 'PROCINFO["-", "NONFATAL"]' or
'PROCINFO["/dev/stdout", "NONFATAL"]'.  For standard error, use
'PROCINFO["/dev/stderr", "NONFATAL"]'.

   When attempting to open a TCP/IP socket (*note TCP/IP Networking::),
'gawk' tries multiple times.  The 'GAWK_SOCK_RETRIES' environment
variable (*note Other Environment Variables::) allows you to override
'gawk''s builtin default number of attempts.  However, once nonfatal I/O
is enabled for a given socket, 'gawk' only retries once, relying on
'awk'-level code to notice that there was a problem.


File: gawk.info,  Node: Output Summary,  Next: Output Exercises,  Prev: Nonfatal,  Up: Printing

5.11 Summary
============

   * The 'print' statement prints comma-separated expressions.  In the
     output, the expressions are separated by the value of 'OFS', and
     the record is terminated by the value of 'ORS'.  'OFMT' provides
     the conversion format for numeric values for the 'print' statement.

   * The 'printf' statement provides finer-grained control over output,
     with format-control letters for different data types and various
     flags that modify the behavior of the format-control letters.

   * Output from both 'print' and 'printf' may be redirected to files,
     pipes, and coprocesses.

   * 'gawk' provides special file names for access to standard input,
     output, and error, and for network communications.

   * Use 'close()' to close open file, pipe, and coprocess redirections.
     For coprocesses, it is possible to close only one direction of the
     communications.

   * Normally errors with 'print' or 'printf' are fatal.  'gawk' lets
     you make output errors be nonfatal either for all files or on a
     per-file basis.  You must then check for errors after every
     relevant output statement.


File: gawk.info,  Node: Output Exercises,  Prev: Output Summary,  Up: Printing

5.12 Exercises
==============

  1. Rewrite the program:

          awk 'BEGIN { print "Month Crates"
                       print "----- ------" }
                     { print $1, "     ", $2 }' inventory-shipped

     from *note Output Separators::, by using a new value of 'OFS'.

  2. Use the 'printf' statement to line up the headings and table data
     for the 'inventory-shipped' example that was covered in *note
     Print::.

  3. What happens if you forget the double quotes when redirecting
     output, as follows:

          BEGIN { print "Serious error detected!" > /dev/stderr }


File: gawk.info,  Node: Expressions,  Next: Patterns and Actions,  Prev: Printing,  Up: Top

6 Expressions
*************

Expressions are the basic building blocks of 'awk' patterns and actions.
An expression evaluates to a value that you can print, test, or pass to
a function.  Additionally, an expression can assign a new value to a
variable or a field by using an assignment operator.

   An expression can serve as a pattern or action statement on its own.
Most other kinds of statements contain one or more expressions that
specify the data on which to operate.  As in other languages,
expressions in 'awk' can include variables, array references, constants,
and function calls, as well as combinations of these with various
operators.

* Menu:

* Values::                      Constants, Variables, and Regular Expressions.
* All Operators::               'gawk''s operators.
* Truth Values and Conditions:: Testing for true and false.
* Function Calls::              A function call is an expression.
* Precedence::                  How various operators nest.
* Locales::                     How the locale affects things.
* Expressions Summary::         Expressions summary.


File: gawk.info,  Node: Values,  Next: All Operators,  Up: Expressions

6.1 Constants, Variables, and Conversions
=========================================

Expressions are built up from values and the operations performed upon
them.  This minor node describes the elementary objects that provide the
values used in expressions.

* Menu:

* Constants::                   String, numeric and regexp constants.
* Using Constant Regexps::      When and how to use a regexp constant.
* Variables::                   Variables give names to values for later use.
* Conversion::                  The conversion of strings to numbers and vice
                                versa.


File: gawk.info,  Node: Constants,  Next: Using Constant Regexps,  Up: Values

6.1.1 Constant Expressions
--------------------------

The simplest type of expression is the "constant", which always has the
same value.  There are three types of constants: numeric, string, and
regular expression.

   Each is used in the appropriate context when you need a data value
that isn't going to change.  Numeric constants can have different forms,
but are internally stored in an identical manner.

* Menu:

* Scalar Constants::            Numeric and string constants.
* Nondecimal-numbers::          What are octal and hex numbers.
* Regexp Constants::            Regular Expression constants.


File: gawk.info,  Node: Scalar Constants,  Next: Nondecimal-numbers,  Up: Constants

6.1.1.1 Numeric and String Constants
....................................

A "numeric constant" stands for a number.  This number can be an
integer, a decimal fraction, or a number in scientific (exponential)
notation.(1)  Here are some examples of numeric constants that all have
the same value:

     105
     1.05e+2
     1050e-1
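
   That these denote the same value is easy to verify (a quick check,
not one of this manual's examples):

```shell
# Both comparisons yield 1 (true): the three constants are equal.
awk 'BEGIN { print (105 == 1.05e+2), (105 == 1050e-1) }'   # prints "1 1"
```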

   A "string constant" consists of a sequence of characters enclosed in
double quotation marks.  For example:

     "parrot"

represents the string whose contents are 'parrot'.  Strings in 'gawk'
can be of any length, and they can contain any of the possible
eight-bit characters, including ASCII NUL (character code zero).  Other
'awk' implementations may have difficulty with some character codes.

   ---------- Footnotes ----------

   (1) The internal representation of all numbers, including integers,
uses double-precision floating-point numbers.  On most modern systems,
these are in IEEE 754 standard format.  *Note Arbitrary Precision
Arithmetic::, for much more information.


File: gawk.info,  Node: Nondecimal-numbers,  Next: Regexp Constants,  Prev: Scalar Constants,  Up: Constants

6.1.1.2 Octal and Hexadecimal Numbers
.....................................

In 'awk', all numbers are in decimal (i.e., base 10).  Many other
programming languages allow you to specify numbers in other bases, often
octal (base 8) and hexadecimal (base 16).  In octal, the numbers go 0,
1, 2, 3, 4, 5, 6, 7, 10, 11, 12, and so on.  Just as '11' in decimal is
1 times 10 plus 1, so '11' in octal is 1 times 8 plus 1.  This equals 9
in decimal.  In hexadecimal, there are 16 digits.  Because the everyday
decimal number system only has ten digits ('0'-'9'), the letters 'a'
through 'f' are used to represent the rest.  (Case in the letters is
usually irrelevant; hexadecimal 'a' and 'A' have the same value.)  Thus,
'11' in hexadecimal is 1 times 16 plus 1, which equals 17 in decimal.

   Just by looking at plain '11', you can't tell what base it's in.  So,
in C, C++, and other languages derived from C, there is a special
notation to signify the base.  Octal numbers start with a leading '0',
and hexadecimal numbers start with a leading '0x' or '0X':

'11'
     Decimal value 11

'011'
     Octal 11, decimal value 9

'0x11'
     Hexadecimal 11, decimal value 17

   This example shows the difference:

     $ gawk 'BEGIN { printf "%d, %d, %d\n", 011, 11, 0x11 }'
     -| 9, 11, 17

   Being able to use octal and hexadecimal constants in your programs is
most useful when working with data that cannot be represented
conveniently as characters or as regular numbers, such as binary data of
various sorts.

   'gawk' allows the use of octal and hexadecimal constants in your
program text.  However, such numbers in the input data are not treated
differently; doing so by default would break old programs.  (If you
really need to do this, use the '--non-decimal-data' command-line
option; *note Nondecimal Data::.)  If you have octal or hexadecimal
data, you can use the 'strtonum()' function (*note String Functions::)
to convert the data into a number.  Most of the time, you will want to
use octal or hexadecimal constants when working with the built-in
bit-manipulation functions; see *note Bitwise Functions:: for more
information.

   Unlike in some early C implementations, '8' and '9' are not valid in
octal constants.  For example, 'gawk' treats '018' as decimal 18:

     $ gawk 'BEGIN { print "021 is", 021 ; print 018 }'
     -| 021 is 17
     -| 18

   Octal and hexadecimal source code constants are a 'gawk' extension.
If 'gawk' is in compatibility mode (*note Options::), they are not
available.

              A Constant's Base Does Not Affect Its Value

   Once a numeric constant has been converted internally into a number,
'gawk' no longer remembers what the original form of the constant was;
the internal value is always used.  This has particular consequences for
conversion of numbers to strings:

     $ gawk 'BEGIN { printf "0x11 is <%s>\n", 0x11 }'
     -| 0x11 is <17>


File: gawk.info,  Node: Regexp Constants,  Prev: Nondecimal-numbers,  Up: Constants

6.1.1.3 Regular Expression Constants
....................................

A "regexp constant" is a regular expression description enclosed in
slashes, such as '/^beginning and end$/'.  Most regexps used in 'awk'
programs are constant, but the '~' and '!~' matching operators can also
match computed or dynamic regexps (which are typically just ordinary
strings or variables that contain a regexp, but could be more complex
expressions).


File: gawk.info,  Node: Using Constant Regexps,  Next: Variables,  Prev: Constants,  Up: Values

6.1.2 Using Regular Expression Constants
----------------------------------------

When used on the righthand side of the '~' or '!~' operators, a regexp
constant merely stands for the regexp that is to be matched.  However,
regexp constants (such as '/foo/') may be used like simple expressions.
When a regexp constant appears by itself, it has the same meaning as if
it appeared in a pattern (i.e., '($0 ~ /foo/)').  (d.c.)  *Note
Expression Patterns::.  This means that the following two code segments:

     if ($0 ~ /barfly/ || $0 ~ /camelot/)
         print "found"

and:

     if (/barfly/ || /camelot/)
         print "found"

are exactly equivalent.  One rather bizarre consequence of this rule is
that the following Boolean expression is valid, but does not do what its
author probably intended:

     # Note that /foo/ is on the left of the ~
     if (/foo/ ~ $1) print "found foo"

This code is "obviously" testing '$1' for a match against the regexp
'/foo/'.  But in fact, the expression '/foo/ ~ $1' really means '($0 ~
/foo/) ~ $1'.  In other words, first match the input record against the
regexp '/foo/'.  The result is either zero or one, depending upon the
success or failure of the match.  That result is then matched against
the first field in the record.  Because it is unlikely that you would
ever really want to make this kind of test, 'gawk' issues a warning when
it sees this construct in a program.  Another consequence of this rule
is that the assignment statement:

     matches = /foo/

assigns either zero or one to the variable 'matches', depending upon the
contents of the current input record.

   Constant regular expressions are also used as the first argument for
the 'gensub()', 'sub()', and 'gsub()' functions, as the second argument
of the 'match()' function, and as the third argument of the 'split()'
and 'patsplit()' functions (*note String Functions::).  Modern
implementations of 'awk', including 'gawk', allow the third argument of
'split()' to be a regexp constant, but some older implementations do
not.  (d.c.)  Because some built-in functions accept regexp constants as
arguments, confusion can arise when attempting to use regexp constants
as arguments to user-defined functions (*note User-defined::).  For
example:

     function mysub(pat, repl, str, global)
     {
         if (global)
             gsub(pat, repl, str)
         else
             sub(pat, repl, str)
         return str
     }

     {
         ...
         text = "hi! hi yourself!"
         mysub(/hi/, "howdy", text, 1)
         ...
     }

   In this example, the programmer wants to pass a regexp constant to
the user-defined function 'mysub()', which in turn passes it on to
either 'sub()' or 'gsub()'.  However, what really happens is that the
'pat' parameter is assigned a value of either one or zero, depending
upon whether or not '$0' matches '/hi/'.  'gawk' issues a warning when
it sees a regexp constant used as a parameter to a user-defined
function, because passing a truth value in this way is probably not what
was intended.
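
The usual workaround is to pass the regexp as a string constant, which
the built-in functions accept as a dynamic regexp.  A sketch of the
corrected call, using the same 'mysub()' function:

```shell
awk 'function mysub(pat, repl, str, global)
{
    if (global)
        gsub(pat, repl, str)
    else
        sub(pat, repl, str)
    return str    # str is a local copy; the caller'\''s variable is unchanged
}
BEGIN {
    text = "hi! hi yourself!"
    print mysub("hi", "howdy", text, 1)   # pass the string "hi", not /hi/
}'
# prints: howdy! howdy yourself!
```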


File: gawk.info,  Node: Variables,  Next: Conversion,  Prev: Using Constant Regexps,  Up: Values

6.1.3 Variables
---------------

"Variables" are ways of storing values at one point in your program for
use later in another part of your program.  They can be manipulated
entirely within the program text, and they can also be assigned values
on the 'awk' command line.

* Menu:

* Using Variables::             Using variables in your programs.
* Assignment Options::          Setting variables on the command line and a
                                summary of command-line syntax. This is an
                                advanced method of input.


File: gawk.info,  Node: Using Variables,  Next: Assignment Options,  Up: Variables

6.1.3.1 Using Variables in a Program
....................................

Variables let you give names to values and refer to them later.
Variables have already been used in many of the examples.  The name of a
variable must be a sequence of letters, digits, or underscores, and it
may not begin with a digit.  Here, a "letter" is any one of the 52
upper- and lowercase English letters.  Other characters that may be
defined as letters in non-English locales are not valid in variable
names.  Case is significant in variable names; 'a' and 'A' are distinct
variables.

   A variable name is a valid expression by itself; it represents the
variable's current value.  Variables are given new values with
"assignment operators", "increment operators", and "decrement operators"
(*note Assignment Ops::).  In addition, the 'sub()' and 'gsub()'
functions can change a variable's value, and the 'match()', 'split()',
and 'patsplit()' functions can change the contents of their array
parameters (*note String Functions::).

   A few variables have special built-in meanings, such as 'FS' (the
field separator) and 'NF' (the number of fields in the current input
record).  *Note Built-in Variables:: for a list of the predefined
variables.  These predefined variables can be used and assigned just
like all other variables, but their values are also used or changed
automatically by 'awk'.  All predefined variables' names are entirely
uppercase.

   Variables in 'awk' can be assigned either numeric or string values.
The kind of value a variable holds can change over the life of a
program.  By default, variables are initialized to the empty string,
which is zero if converted to a number.  There is no need to explicitly
initialize a variable in 'awk', which is what you would do in C and in
most other traditional languages.
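
A minimal sketch showing an uninitialized variable used both ways (the
variable name 'x' is arbitrary):

```shell
# An unassigned variable acts as "" in string contexts
# and as 0 in numeric contexts.
awk 'BEGIN { print "<" x ">"    # prints: <>
             print x + 0 }'     # prints: 0
```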


File: gawk.info,  Node: Assignment Options,  Prev: Using Variables,  Up: Variables

6.1.3.2 Assigning Variables on the Command Line
...............................................

Any 'awk' variable can be set by including a "variable assignment" among
the arguments on the command line when 'awk' is invoked (*note Other
Arguments::).  Such an assignment has the following form:

     VARIABLE=TEXT

With it, a variable is set either at the beginning of the 'awk' run or
in between input files.  When the assignment is preceded with the '-v'
option, as in the following:

     -v VARIABLE=TEXT

the variable is set at the very beginning, even before the 'BEGIN' rules
execute.  The '-v' option and its assignment must precede all the file
name arguments, as well as the program text.  (*Note Options:: for more
information about the '-v' option.)  Otherwise, the variable assignment
is performed at a time determined by its position among the input file
arguments--after the processing of the preceding input file argument.
For example:

     awk '{ print $n }' n=4 inventory-shipped n=2 mail-list

prints the value of field number 'n' for all input records.  Before the
first file is read, the command line sets the variable 'n' equal to
four.  This causes the fourth field to be printed in lines from
'inventory-shipped'.  After the first file has finished, but before the
second file is started, 'n' is set to two, so that the second field is
printed in lines from 'mail-list':

     $ awk '{ print $n }' n=4 inventory-shipped n=2 mail-list
     -| 15
     -| 24
     ...
     -| 555-5553
     -| 555-3412
     ...

   Command-line arguments are made available for explicit examination by
the 'awk' program in the 'ARGV' array (*note ARGC and ARGV::).  'awk'
processes the values of command-line assignments for escape sequences
(*note Escape Sequences::).  (d.c.)
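
For example, the '\t' in the following assignment becomes a literal tab
character in the variable's value (the variable name 'msg' is
arbitrary):

```shell
# The shell single quotes pass the two characters \ and t to awk;
# awk then processes the escape sequence in the assignment.
awk -v msg='one\ttwo' 'BEGIN { print msg }'
# prints "one", a tab character, then "two"
```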


File: gawk.info,  Node: Conversion,  Prev: Variables,  Up: Values

6.1.4 Conversion of Strings and Numbers
---------------------------------------

Number-to-string and string-to-number conversion are generally
straightforward.  There can be subtleties to be aware of; this minor
node discusses this important facet of 'awk'.

* Menu:

* Strings And Numbers::         How 'awk' Converts Between Strings And
                                Numbers.
* Locale influences conversions:: How the locale may affect conversions.


File: gawk.info,  Node: Strings And Numbers,  Next: Locale influences conversions,  Up: Conversion

6.1.4.1 How 'awk' Converts Between Strings and Numbers
......................................................

Strings are converted to numbers and numbers are converted to strings,
if the context of the 'awk' program demands it.  For example, if the
value of either 'foo' or 'bar' in the expression 'foo + bar' happens to
be a string, it is converted to a number before the addition is
performed.  If numeric values appear in string concatenation, they are
converted to strings.  Consider the following:

     two = 2; three = 3
     print (two three) + 4

This prints the (numeric) value 27.  The numeric values of the variables
'two' and 'three' are converted to strings and concatenated together.
The resulting string is converted back to the number 23, to which 4 is
then added.

   If, for some reason, you need to force a number to be converted to a
string, concatenate that number with the empty string, '""'.  To force a
string to be converted to a number, add zero to that string.  A string
is converted to a number by interpreting any numeric prefix of the
string as numerals: '"2.5"' converts to 2.5, '"1e3"' converts to 1,000,
and '"25fix"' has a numeric value of 25.  Strings that can't be
interpreted as valid numbers convert to zero.
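
These rules can be seen in the following sketch (the sample strings are
arbitrary):

```shell
awk 'BEGIN {
    print "2.5" + 0      # numeric prefix: prints 2.5
    print "1e3" + 0      # exponential notation: prints 1000
    print "25fix" + 0    # only the numeric prefix counts: prints 25
    print "fix25" + 0    # no numeric prefix at all: prints 0
    n = 17
    print (n "") "!"     # concatenating "" forces a string: prints 17!
}'
```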

   The exact manner in which numbers are converted into strings is
controlled by the 'awk' predefined variable 'CONVFMT' (*note Built-in
Variables::).  Numbers are converted using the 'sprintf()' function with
'CONVFMT' as the format specifier (*note String Functions::).

   'CONVFMT''s default value is '"%.6g"', which creates a value with at
most six significant digits.  For some applications, you might want to
change it to specify more precision.  On most modern machines, 17 digits
is usually enough to capture a floating-point number's value exactly.(1)

   Strange results can occur if you set 'CONVFMT' to a string that
doesn't tell 'sprintf()' how to format floating-point numbers in a
useful way.  For example, if you forget the '%' in the format, 'awk'
converts all numbers to the same constant string.

   As a special case, if a number is an integer, then the result of
converting it to a string is _always_ an integer, no matter what the
value of 'CONVFMT' may be.  Given the following code fragment:

     CONVFMT = "%2.2f"
     a = 12
     b = a ""

'b' has the value '"12"', not '"12.00"'.  (d.c.)
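
The integer special case is easy to see by contrasting an integer with
a nonintegral value under the same 'CONVFMT':

```shell
awk 'BEGIN {
    CONVFMT = "%.2f"
    a = 12;    print (a "")    # integer: prints 12, CONVFMT is ignored
    b = 12.5;  print (b "")    # nonintegral: prints 12.50
}'
```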

           Pre-POSIX 'awk' Used 'OFMT' for String Conversion

   Prior to the POSIX standard, 'awk' used the value of 'OFMT' for
converting numbers to strings.  'OFMT' specifies the output format to
use when printing numbers with 'print'.  'CONVFMT' was introduced in
order to separate the semantics of conversion from the semantics of
printing.  Both 'CONVFMT' and 'OFMT' have the same default value:
'"%.6g"'.  In the vast majority of cases, old 'awk' programs do not
change their behavior.  *Note Print:: for more information on the
'print' statement.

   ---------- Footnotes ----------

   (1) Pathological cases can require up to 752 digits (!), but we doubt
that you need to worry about this.


File: gawk.info,  Node: Locale influences conversions,  Prev: Strings And Numbers,  Up: Conversion

6.1.4.2 Locales Can Influence Conversion
........................................

Where you are can matter when it comes to converting between numbers and
strings.  The local character set and language--the "locale"--can affect
numeric formats.  In particular, for 'awk' programs, it affects the
decimal point character and the thousands-separator character.  The
'"C"' locale, and most English-language locales, use the period
character ('.') as the decimal point and don't have a thousands
separator.  However, many (if not most) European and non-English locales
use the comma (',') as the decimal point character.  European locales
often use either a space or a period as the thousands separator, if they
have one.

   The POSIX standard says that 'awk' always uses the period as the
decimal point when reading the 'awk' program source code, and for
command-line variable assignments (*note Other Arguments::).  However,
when interpreting input data, for 'print' and 'printf' output, and for
number-to-string conversion, the local decimal point character is used.
(d.c.)  In all cases, numbers in source code and in input data cannot
have a thousands separator.  Here are some examples indicating the
difference in behavior, on a GNU/Linux system:

     $ export POSIXLY_CORRECT=1                        Force POSIX behavior
     $ gawk 'BEGIN { printf "%g\n", 3.1415927 }'
     -| 3.14159
     $ LC_ALL=en_DK.utf-8 gawk 'BEGIN { printf "%g\n", 3.1415927 }'
     -| 3,14159
     $ echo 4,321 | gawk '{ print $1 + 1 }'
     -| 5
     $ echo 4,321 | LC_ALL=en_DK.utf-8 gawk '{ print $1 + 1 }'
     -| 5,321

The 'en_DK.utf-8' locale is for English in Denmark, where the comma acts
as the decimal point separator.  In the normal '"C"' locale, 'gawk'
treats '4,321' as 4, while in the Danish locale, it's treated as the
full number including the fractional part, 4.321.

   Some earlier versions of 'gawk' fully complied with this aspect of
the standard.  However, many users in non-English locales complained
about this behavior, because their data used a period as the decimal
point, so the default behavior was restored to use a period as the
decimal point character.  You can use the '--use-lc-numeric' option
(*note Options::) to force 'gawk' to use the locale's decimal point
character.  ('gawk' also uses the locale's decimal point character when
in POSIX mode, either via '--posix' or the 'POSIXLY_CORRECT' environment
variable, as shown previously.)

   *note Table 6.1: table-locale-affects. describes the cases in which
the locale's decimal point character is used and when a period is used.
Some of these features have not been described yet.

Feature        Default        '--posix' or
                              '--use-lc-numeric'
------------------------------------------------------------
'%'g'          Use locale     Use locale
'%g'           Use period     Use locale
Input          Use period     Use locale
'strtonum()'   Use period     Use locale

Table 6.1: Locale decimal point versus a period

   Finally, modern-day formal standards and the IEEE standard
floating-point representation can have an unusual but important effect
on the way 'gawk' converts some special string values to numbers.  The
details are presented in *note POSIX Floating Point Problems::.


File: gawk.info,  Node: All Operators,  Next: Truth Values and Conditions,  Prev: Values,  Up: Expressions

6.2 Operators: Doing Something with Values
==========================================

This minor node introduces the "operators" that make use of the values
provided by constants and variables.

* Menu:

* Arithmetic Ops::              Arithmetic operations ('+', '-',
                                etc.)
* Concatenation::               Concatenating strings.
* Assignment Ops::              Changing the value of a variable or a field.
* Increment Ops::               Incrementing the numeric value of a variable.


File: gawk.info,  Node: Arithmetic Ops,  Next: Concatenation,  Up: All Operators

6.2.1 Arithmetic Operators
--------------------------

The 'awk' language uses the common arithmetic operators when evaluating
expressions.  All of these arithmetic operators follow normal precedence
rules and work as you would expect them to.

   The following example uses a file named 'grades', which contains a
list of student names as well as three test scores per student (it's a
small class):

     Pat   100 97 58
     Sandy  84 72 93
     Chris  72 92 89

This program takes the file 'grades' and prints the average of the
scores:

     $ awk '{ sum = $2 + $3 + $4 ; avg = sum / 3
     >        print $1, avg }' grades
     -| Pat 85
     -| Sandy 83
     -| Chris 84.3333

   The following list provides the arithmetic operators in 'awk', in
order from the highest precedence to the lowest:

'X ^ Y'
'X ** Y'
     Exponentiation; X raised to the Y power.  '2 ^ 3' has the value
     eight; the character sequence '**' is equivalent to '^'.  (c.e.)

'- X'
     Negation.

'+ X'
     Unary plus; the expression is converted to a number.

'X * Y'
     Multiplication.

'X / Y'
     Division; because all numbers in 'awk' are floating-point numbers,
     the result is _not_ rounded to an integer--'3 / 4' has the value
     0.75.  (It is a common mistake, especially for C programmers, to
     forget that _all_ numbers in 'awk' are floating point, and that
     division of integer-looking constants produces a real number, not
     an integer.)

'X % Y'
     Remainder; further discussion is provided in the text, just after
     this list.

'X + Y'
     Addition.

'X - Y'
     Subtraction.

   Unary plus and minus have the same precedence, the multiplication
operators all have the same precedence, and addition and subtraction
have the same precedence.

   When computing the remainder of 'X % Y', the quotient is rounded
toward zero to an integer and multiplied by Y.  This result is
subtracted from X; this operation is sometimes known as "trunc-mod."
The following relation always holds:

     b * int(a / b) + (a % b) == a

   One possibly undesirable effect of this definition of remainder is
that 'X % Y' is negative if X is negative.  Thus:

     -17 % 8 = -1

   In other 'awk' implementations, the signedness of the remainder may
be machine-dependent.
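
The relation and the sign behavior are easy to check in 'awk' itself:

```shell
awk 'BEGIN {
    a = -17; b = 8
    print a % b                            # quotient truncates toward zero: prints -1
    print (b * int(a / b) + (a % b) == a)  # the relation holds: prints 1
}'
```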

     NOTE: The POSIX standard only specifies the use of '^' for
     exponentiation.  For maximum portability, do not use the '**'
     operator.


File: gawk.info,  Node: Concatenation,  Next: Assignment Ops,  Prev: Arithmetic Ops,  Up: All Operators

6.2.2 String Concatenation
--------------------------

     It seemed like a good idea at the time.
                         -- _Brian Kernighan_

   There is only one string operation: concatenation.  It does not have
a specific operator to represent it.  Instead, concatenation is
performed by writing expressions next to one another, with no operator.
For example:

     $ awk '{ print "Field number one: " $1 }' mail-list
     -| Field number one: Amelia
     -| Field number one: Anthony
     ...

   Without the space in the string constant after the ':', the line runs
together.  For example:

     $ awk '{ print "Field number one:" $1 }' mail-list
     -| Field number one:Amelia
     -| Field number one:Anthony
     ...

   Because string concatenation does not have an explicit operator, it
is often necessary to ensure that it happens at the right time by using
parentheses to enclose the items to concatenate.  For example, you might
expect that the following code fragment concatenates 'file' and 'name':

     file = "file"
     name = "name"
     print "something meaningful" > file name

This produces a syntax error with some versions of Unix 'awk'.(1)  It is
necessary to use the following:

     print "something meaningful" > (file name)

   Parentheses should be used around concatenation in all but the most
common contexts, such as on the righthand side of '='.  Be careful about
the kinds of expressions used in string concatenation.  In particular,
the order of evaluation of expressions used for concatenation is
undefined in the 'awk' language.  Consider this example:

     BEGIN {
         a = "don't"
         print (a " " (a = "panic"))
     }

It is not defined whether the second assignment to 'a' happens before or
after the value of 'a' is retrieved for producing the concatenated
value.  The result could be either 'don't panic', or 'panic panic'.

   The precedence of concatenation, when mixed with other operators, is
often counter-intuitive.  Consider this example:

     $ awk 'BEGIN { print -12 " " -24 }'
     -| -12-24

   This "obviously" is concatenating -12, a space, and -24.  But where
did the space disappear to?  The answer lies in the combination of
operator precedences and 'awk''s automatic conversion rules.  To get the
desired result, write the program this way:

     $ awk 'BEGIN { print -12 " " (-24) }'
     -| -12 -24

   This forces 'awk' to treat the '-' on the '-24' as unary.  Otherwise,
it's parsed as follows:

         -12 ('" "' - 24)
     => -12 (0 - 24)
     => -12 (-24)
     => -12-24

   As mentioned earlier, when mixing concatenation with other operators,
_parenthesize_.  Otherwise, you're never quite sure what you'll get.

   ---------- Footnotes ----------

   (1) It happens that BWK 'awk', 'gawk', and 'mawk' all "get it right,"
but you should not rely on this.


File: gawk.info,  Node: Assignment Ops,  Next: Increment Ops,  Prev: Concatenation,  Up: All Operators

6.2.3 Assignment Expressions
----------------------------

An "assignment" is an expression that stores a (usually different) value
into a variable.  For example, let's assign the value one to the
variable 'z':

     z = 1

   After this expression is executed, the variable 'z' has the value
one.  Whatever old value 'z' had before the assignment is forgotten.

   Assignments can also store string values.  For example, the following
stores the value '"this food is good"' in the variable 'message':

     thing = "food"
     predicate = "good"
     message = "this " thing " is " predicate

This also illustrates string concatenation.  The '=' sign is called an
"assignment operator".  It is the simplest assignment operator because
the value of the righthand operand is stored unchanged.  Most operators
(addition, concatenation, and so on) have no effect except to compute a
value.  If the value isn't used, there's no reason to use the operator.
An assignment operator is different; it does produce a value, but even
if you ignore it, the assignment still makes itself felt through the
alteration of the variable.  We call this a "side effect".

   The lefthand operand of an assignment need not be a variable (*note
Variables::); it can also be a field (*note Changing Fields::) or an
array element (*note Arrays::).  These are all called "lvalues", which
means they can appear on the lefthand side of an assignment operator.
The righthand operand may be any expression; it produces the new value
that the assignment stores in the specified variable, field, or array
element.  (Such values are called "rvalues".)

   It is important to note that variables do _not_ have permanent types.
A variable's type is simply the type of whatever value was last assigned
to it.  In the following program fragment, the variable 'foo' has a
numeric value at first, and a string value later on:

     foo = 1
     print foo
     foo = "bar"
     print foo

When the second assignment gives 'foo' a string value, the fact that it
previously had a numeric value is forgotten.

   String values that do not begin with a digit have a numeric value of
zero.  After executing the following code, the value of 'foo' is five:

     foo = "a string"
     foo = foo + 5

     NOTE: Using a variable as a number and then later as a string can
     be confusing and is poor programming style.  The previous two
     examples illustrate how 'awk' works, _not_ how you should write
     your programs!

   An assignment is an expression, so it has a value--the same value
that is assigned.  Thus, 'z = 1' is an expression with the value one.
One consequence of this is that you can write multiple assignments
together, such as:

     x = y = z = 5

This example stores the value five in all three variables ('x', 'y', and
'z').  It does so because the value of 'z = 5', which is five, is stored
into 'y' and then the value of 'y = z = 5', which is five, is stored
into 'x'.

   Assignments may be used anywhere an expression is called for.  For
example, it is valid to write 'x != (y = 1)' to set 'y' to one, and then
test whether 'x' equals one.  But this style tends to make programs hard
to read; such nesting of assignments should be avoided, except perhaps
in a one-shot program.

   Aside from '=', there are several other assignment operators that do
arithmetic with the old value of the variable.  For example, the
operator '+=' computes a new value by adding the righthand value to the
old value of the variable.  Thus, the following assignment adds five to
the value of 'foo':

     foo += 5

This is equivalent to the following:

     foo = foo + 5

Use whichever makes the meaning of your program clearer.

   There are situations where using '+=' (or any assignment operator) is
_not_ the same as simply repeating the lefthand operand in the righthand
expression.  For example:

     # Thanks to Pat Rankin for this example
     BEGIN  {
         foo[rand()] += 5
         for (x in foo)
            print x, foo[x]

         bar[rand()] = bar[rand()] + 5
         for (x in bar)
            print x, bar[x]
     }

The indices of 'bar' are practically guaranteed to be different, because
'rand()' returns different values each time it is called.  (Arrays and
the 'rand()' function haven't been covered yet.  *Note Arrays::, and
*note Numeric Functions:: for more information.)  This example
illustrates an important fact about assignment operators: the lefthand
expression is only evaluated _once_.

   It is up to the implementation as to which expression is evaluated
first, the lefthand or the righthand.  Consider this example:

     i = 1
     a[i += 2] = i + 1

The value of 'a[3]' could be either two or four.

   *note Table 6.2: table-assign-ops. lists the arithmetic assignment
operators.  In each case, the righthand operand is an expression whose
value is converted to a number.

Operator                     Effect
--------------------------------------------------------------------------
LVALUE '+=' INCREMENT        Add INCREMENT to the value of LVALUE.
LVALUE '-=' DECREMENT        Subtract DECREMENT from the value of LVALUE.
LVALUE '*=' COEFFICIENT      Multiply the value of LVALUE by COEFFICIENT.
LVALUE '/=' DIVISOR          Divide the value of LVALUE by DIVISOR.
LVALUE '%=' MODULUS          Set LVALUE to its remainder by MODULUS.
LVALUE '^=' POWER            Raise LVALUE to the power POWER.
LVALUE '**=' POWER           Raise LVALUE to the power POWER.  (c.e.)

Table 6.2: Arithmetic assignment operators

     NOTE: Only the '^=' operator is specified by POSIX. For maximum
     portability, do not use the '**=' operator.
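
A brief sketch of several of these operators together (the variable
names are arbitrary):

```shell
awk 'BEGIN {
    x = 10; x %= 3;  print x    # 10 % 3: prints 1
    y = 2;  y ^= 5;  print y    # 2 ^ 5: prints 32
    z = 9;  z /= 2;  print z    # floating-point division: prints 4.5
}'
```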

      Syntactic Ambiguities Between '/=' and Regular Expressions

   There is a syntactic ambiguity between the '/=' assignment operator
and regexp constants whose first character is an '='.  (d.c.)  This is
most notable in some commercial 'awk' versions.  For example:

     $ awk /==/ /dev/null
     error-> awk: syntax error at source line 1
     error->  context is
     error->         >>> /= <<<
     error-> awk: bailing out at source line 1

A workaround is:

     awk '/[=]=/' /dev/null

   'gawk' does not have this problem; BWK 'awk' and 'mawk' also do not.


File: gawk.info,  Node: Increment Ops,  Prev: Assignment Ops,  Up: All Operators

6.2.4 Increment and Decrement Operators
---------------------------------------

"Increment" and "decrement operators" increase or decrease the value of
a variable by one.  An assignment operator can do the same thing, so the
increment operators add no power to the 'awk' language; however, they
are convenient abbreviations for very common operations.

   The operator used for adding one is written '++'.  It can be used to
increment a variable either before or after taking its value.  To
"pre-increment" a variable 'v', write '++v'.  This adds one to the value
of 'v'--that new value is also the value of the expression.  (The
assignment expression 'v += 1' is completely equivalent.)  Writing the
'++' after the variable specifies "post-increment".  This increments the
variable value just the same; the difference is that the value of the
increment expression itself is the variable's _old_ value.  Thus, if
'foo' has the value four, then the expression 'foo++' has the value
four, but it changes the value of 'foo' to five.  In other words, the
operator returns the old value of the variable, but with the side effect
of incrementing it.
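
A minimal sketch contrasting the two forms:

```shell
awk 'BEGIN {
    foo = 4
    print foo++   # post-increment yields the old value: prints 4
    print foo     # but the side effect took place: prints 5
    print ++foo   # pre-increment yields the new value: prints 6
}'
```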

   The post-increment 'foo++' is nearly the same as writing '(foo += 1)
- 1'.  It is not perfectly equivalent because all numbers in 'awk' are
floating point--in floating point, 'foo + 1 - 1' does not necessarily
equal 'foo'.  But the difference is minute as long as you stick to
numbers that are fairly small (less than 10e12).

   Fields and array elements are incremented just like variables.  (Use
'$(i++)' when you want to do a field reference and a variable increment
at the same time.  The parentheses are necessary because of the
precedence of the field reference operator '$'.)
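
For example (without the parentheses, '$i++' would mean '($i)++',
incrementing the field's value rather than the field number):

```shell
echo "a b c" | awk '{
    i = 1
    print $(i++)   # field 1: prints a
    print $(i++)   # field 2: prints b
}'
```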

   The decrement operator '--' works just like '++', except that it
subtracts one instead of adding it.  As with '++', it can be used before
the lvalue to pre-decrement or after it to post-decrement.  Following is
a summary of increment and decrement expressions:

'++LVALUE'
     Increment LVALUE, returning the new value as the value of the
     expression.

'LVALUE++'
     Increment LVALUE, returning the _old_ value of LVALUE as the value
     of the expression.

'--LVALUE'
     Decrement LVALUE, returning the new value as the value of the
     expression.  (This expression is like '++LVALUE', but instead of
     adding, it subtracts.)

'LVALUE--'
     Decrement LVALUE, returning the _old_ value of LVALUE as the value
     of the expression.  (This expression is like 'LVALUE++', but
     instead of adding, it subtracts.)

                       Operator Evaluation Order

     Doctor, it hurts when I do this!
     Then don't do that!
                           -- _Groucho Marx_

What happens for something like the following?

     b = 6
     print b += b++

Or something even stranger?

     b = 6
     b += ++b + b++
     print b

   In other words, when do the various side effects prescribed by the
postfix operators ('b++') take effect?  When side effects happen is
"implementation-defined".  In other words, it is up to the particular
version of 'awk'.  The result for the first example may be 12 or 13, and
for the second, it may be 22 or 23.

   In short, doing things like this is not recommended and definitely
not anything that you can rely upon for portability.  You should avoid
such things in your own programs.


File: gawk.info,  Node: Truth Values and Conditions,  Next: Function Calls,  Prev: All Operators,  Up: Expressions

6.3 Truth Values and Conditions
===============================

In certain contexts, expression values also serve as "truth values";
i.e., they determine what should happen next as the program runs.  This
minor node describes how 'awk' defines "true" and "false" and how values
are compared.

* Menu:

* Truth Values::                What is "true" and what is "false".
* Typing and Comparison::       How variables acquire types and how this
                                affects comparison of numbers and strings with
                                '<', etc.
* Boolean Ops::                 Combining comparison expressions using boolean
                                operators '||' ("or"), '&&'
                                ("and") and '!' ("not").
* Conditional Exp::             Conditional expressions select between two
                                subexpressions under control of a third
                                subexpression.


File: gawk.info,  Node: Truth Values,  Next: Typing and Comparison,  Up: Truth Values and Conditions

6.3.1 True and False in 'awk'
-----------------------------

Many programming languages have a special representation for the
concepts of "true" and "false."  Such languages usually use the special
constants 'true' and 'false', or perhaps their uppercase equivalents.
However, 'awk' is different.  It borrows a very simple concept of true
and false from C. In 'awk', any nonzero numeric value _or_ any nonempty
string value is true.  Any other value (zero or the null string, '""')
is false.  The following program prints 'A strange truth value' three
times:

     BEGIN {
        if (3.1415927)
            print "A strange truth value"
        if ("Four Score And Seven Years Ago")
            print "A strange truth value"
        if (j = 57)
            print "A strange truth value"
     }

   There is a surprising consequence of the "nonzero or non-null" rule:
the string constant '"0"' is actually true, because it is non-null.
(d.c.)
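
   This is easy to check from the shell (a quick sketch; any
POSIX-conforming 'awk' should behave the same way):

```shell
# The string constant "0" is non-null, and therefore true,
# while the number 0 is false.
awk 'BEGIN { if ("0") print "string \"0\" is true" }'
awk 'BEGIN { if (0) print "true"; else print "number 0 is false" }'
```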


File: gawk.info,  Node: Typing and Comparison,  Next: Boolean Ops,  Prev: Truth Values,  Up: Truth Values and Conditions

6.3.2 Variable Typing and Comparison Expressions
------------------------------------------------

     The Guide is definitive.  Reality is frequently inaccurate.
      -- _Douglas Adams, 'The Hitchhiker's Guide to the Galaxy'_

   Unlike in other programming languages, in 'awk' variables do not have
a fixed type.  Instead, they can be either a number or a string,
depending upon the value that is assigned to them.  We look now at how
variables are typed, and how 'awk' compares variables.

* Menu:

* Variable Typing::             String type versus numeric type.
* Comparison Operators::        The comparison operators.
* POSIX String Comparison::     String comparison with POSIX rules.


File: gawk.info,  Node: Variable Typing,  Next: Comparison Operators,  Up: Typing and Comparison

6.3.2.1 String Type versus Numeric Type
.......................................

The POSIX standard introduced the concept of a "numeric string", which
is simply a string that looks like a number--for example, '" +2"'.  This
concept is used for determining the type of a variable.  The type of the
variable is important because the types of two variables determine how
they are compared.  Variable typing follows these rules:

   * A numeric constant or the result of a numeric operation has the
     "numeric" attribute.

   * A string constant or the result of a string operation has the
     "string" attribute.

   * Fields, 'getline' input, 'FILENAME', 'ARGV' elements, 'ENVIRON'
     elements, and the elements of an array created by 'match()',
     'split()', and 'patsplit()' that are numeric strings have the
     "strnum" attribute.  Otherwise, they have the "string" attribute.
     Uninitialized variables also have the "strnum" attribute.

   * Attributes propagate across assignments but are not changed by any
     use.

   The last rule is particularly important.  In the following program,
'a' has numeric type, even though it is later used in a string
operation:

     BEGIN {
          a = 12.345
          b = a " is a cute number"
          print b
     }

   When two operands are compared, either string comparison or numeric
comparison may be used.  This depends upon the attributes of the
operands, according to the following symmetric matrix:

             +----------------------------------------------
             |       STRING          NUMERIC         STRNUM
     --------+----------------------------------------------
             |
     STRING  |       string          string          string
             |
     NUMERIC |       string          numeric         numeric
             |
     STRNUM  |       string          numeric         numeric
     --------+----------------------------------------------

   The basic idea is that user input that looks numeric--and _only_ user
input--should be treated as numeric, even though it is actually made of
characters and is therefore also a string.  Thus, for example, the
string constant '" +3.14"', when it appears in program source code, is a
string--even though it looks numeric--and is _never_ treated as a number
for comparison purposes.

   In short, when one operand is a "pure" string, such as a string
constant, then a string comparison is performed.  Otherwise, a numeric
comparison is performed.

   This point bears additional emphasis: All user input is made of
characters, and so is first and foremost of string type; input strings
that look numeric are additionally given the strnum attribute.  Thus,
the six-character input string ' +3.14' receives the strnum attribute.
In contrast, the eight characters '" +3.14"' appearing in program text
comprise a string constant.  The following examples print '1' when the
comparison between the two different constants is true, and '0'
otherwise:

     $ echo ' +3.14' | awk '{ print($0 == " +3.14") }'    True
     -| 1
     $ echo ' +3.14' | awk '{ print($0 == "+3.14") }'     False
     -| 0
     $ echo ' +3.14' | awk '{ print($0 == "3.14") }'      False
     -| 0
     $ echo ' +3.14' | awk '{ print($0 == 3.14) }'        True
     -| 1
     $ echo ' +3.14' | awk '{ print($1 == " +3.14") }'    False
     -| 0
     $ echo ' +3.14' | awk '{ print($1 == "+3.14") }'     True
     -| 1
     $ echo ' +3.14' | awk '{ print($1 == "3.14") }'      False
     -| 0
     $ echo ' +3.14' | awk '{ print($1 == 3.14) }'        True
     -| 1


File: gawk.info,  Node: Comparison Operators,  Next: POSIX String Comparison,  Prev: Variable Typing,  Up: Typing and Comparison

6.3.2.2 Comparison Operators
............................

"Comparison expressions" compare strings or numbers for relationships
such as equality.  They are written using "relational operators", which
are a superset of those in C. *note Table 6.3: table-relational-ops.
describes them.

Expression         Result
--------------------------------------------------------------------------
X '<' Y            True if X is less than Y
X '<=' Y           True if X is less than or equal to Y
X '>' Y            True if X is greater than Y
X '>=' Y           True if X is greater than or equal to Y
X '==' Y           True if X is equal to Y
X '!=' Y           True if X is not equal to Y
X '~' Y            True if the string X matches the regexp denoted by Y
X '!~' Y           True if the string X does not match the regexp
                   denoted by Y
SUBSCRIPT 'in'     True if the array ARRAY has an element with the
ARRAY              subscript SUBSCRIPT

Table 6.3: Relational operators

   Comparison expressions have the value one if true and zero if false.
When comparing operands of mixed types, numeric operands are converted
to strings using the value of 'CONVFMT' (*note Conversion::).

   Strings are compared by comparing the first character of each, then
the second character of each, and so on.  Thus, '"10"' is less than
'"9"'.  If there are two strings where one is a prefix of the other, the
shorter string is less than the longer one.  Thus, '"abc"' is less than
'"abcd"'.
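
   These ordering rules are easy to confirm interactively (a sketch;
a comparison between two string constants is always a string
comparison):

```shell
# Character-by-character comparison: '1' sorts before '9',
# and a prefix sorts before the longer string.
awk 'BEGIN { print ("10" < "9") }'      # prints 1 (true)
awk 'BEGIN { print ("abc" < "abcd") }'  # prints 1 (true)
```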

   It is very easy to accidentally mistype the '==' operator and leave
off one of the '=' characters.  The result is still valid 'awk' code,
but the program does not do what is intended:

     if (a = b)   # oops! should be a == b
        ...
     else
        ...

Unless 'b' happens to be zero or the null string, the 'if' part of the
test always succeeds.  Because the operators are so similar, this kind
of error is very difficult to spot when scanning the source code.

   The following list of expressions illustrates the kinds of
comparisons 'awk' performs, as well as what the result of each
comparison is:

'1.5 <= 2.0'
     Numeric comparison (true)

'"abc" >= "xyz"'
     String comparison (false)

'1.5 != " +2"'
     String comparison (true)

'"1e2" < "3"'
     String comparison (true)

'a = 2; b = "2"'
'a == b'
     String comparison (true)

'a = 2; b = " +2"'
'a == b'
     String comparison (false)
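
   The last case in this list can be surprising, so here is a runnable
sketch contrasting a string constant with genuine user input:

```shell
# 'b' is a string constant, so 'a == b' is a string comparison
# ("2" versus " +2") and is false:
awk 'BEGIN { a = 2; b = " +2"; print (a == b) }'   # prints 0
# The same characters arriving as input form a strnum,
# so the comparison with 2 is numeric and true:
echo ' +2' | awk '{ print ($0 == 2) }'             # prints 1
```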

   In this example:

     $ echo 1e2 3 | awk '{ print ($1 < $2) ? "true" : "false" }'
     -| false

the result is 'false' because both '$1' and '$2' are user input.  They
are numeric strings--therefore both have the strnum attribute, dictating
a numeric comparison.  The purpose of the comparison rules and the use
of numeric strings is to attempt to produce the behavior that is "least
surprising," while still "doing the right thing."

   String comparisons and regular expression comparisons are very
different.  For example:

     x == "foo"

has the value one, or is true if the variable 'x' is precisely 'foo'.
By contrast:

     x ~ /foo/

has the value one if 'x' contains 'foo', such as '"Oh, what a fool am
I!"'.

   The righthand operand of the '~' and '!~' operators may be either a
regexp constant ('/'...'/') or an ordinary expression.  In the latter
case, the value of the expression as a string is used as a dynamic
regexp (*note Regexp Usage::; also *note Computed Regexps::).

   A constant regular expression in slashes by itself is also an
expression.  '/REGEXP/' is an abbreviation for the following comparison
expression:

     $0 ~ /REGEXP/

   One special place where '/foo/' is _not_ an abbreviation for '$0 ~
/foo/' is when it is the righthand operand of '~' or '!~'.  *Note Using
Constant Regexps::, where this is discussed in more detail.
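
   A short sketch of a dynamic regexp (the pattern text here is just an
illustration):

```shell
# The string value of 'pat' is used as a regexp at runtime:
awk 'BEGIN { pat = "fo+"; if ("foo" ~ pat) print "match" }'   # prints match
```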


File: gawk.info,  Node: POSIX String Comparison,  Prev: Comparison Operators,  Up: Typing and Comparison

6.3.2.3 String Comparison Based on Locale Collating Order
.........................................................

The POSIX standard used to say that all string comparisons are performed
based on the locale's "collating order".  This is the order in which
characters sort, as defined by the locale (for more discussion, *note
Locales::).  This order is usually very different from the results
obtained when doing straight byte-by-byte comparison.(1)

   Because this behavior differs considerably from existing practice,
'gawk' only implemented it when in POSIX mode (*note Options::).  Here
is an example to illustrate the difference, in an 'en_US.UTF-8' locale:

     $ gawk 'BEGIN { printf("ABC < abc = %s\n",
     >                     ("ABC" < "abc" ? "TRUE" : "FALSE")) }'
     -| ABC < abc = TRUE
     $ gawk --posix 'BEGIN { printf("ABC < abc = %s\n",
     >                             ("ABC" < "abc" ? "TRUE" : "FALSE")) }'
     -| ABC < abc = FALSE

   Fortunately, as of August 2016, comparison based on locale collating
order is no longer required for the '==' and '!=' operators.(2)
However, comparison based on locales is still required for '<', '<=',
'>', and '>='.  POSIX thus recommends as follows:

     Since the '==' operator checks whether strings are identical, not
     whether they collate equally, applications needing to check whether
     strings collate equally can use:

          a <= b && a >= b

   As of version 4.2, 'gawk' continues to use locale collating order for
'<', '<=', '>', and '>=' only in POSIX mode.
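
   The POSIX idiom quoted above can be sketched as follows (in the
default '"C"' locale it gives the same answer as '=='):

```shell
# "Collates equally" test, per the POSIX recommendation:
awk 'BEGIN { a = "abc"; b = "abc"
             print ((a <= b && a >= b) ? "collate equal" : "differ") }'
```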

   ---------- Footnotes ----------

   (1) Technically, string comparison is supposed to behave the same way
as if the strings were compared with the C 'strcoll()' function.

   (2) See the Austin Group website
(http://austingroupbugs.net/view.php?id=1070).


File: gawk.info,  Node: Boolean Ops,  Next: Conditional Exp,  Prev: Typing and Comparison,  Up: Truth Values and Conditions

6.3.3 Boolean Expressions
-------------------------

A "Boolean expression" is a combination of comparison expressions or
matching expressions, using the Boolean operators "or" ('||'), "and"
('&&'), and "not" ('!'), along with parentheses to control nesting.  The
truth value of the Boolean expression is computed by combining the truth
values of the component expressions.  Boolean expressions are also
referred to as "logical expressions".  The terms are equivalent.

   Boolean expressions can be used wherever comparison and matching
expressions can be used.  They can be used in 'if', 'while', 'do', and
'for' statements (*note Statements::).  They have numeric values (one if
true, zero if false) that come into play if the result of the Boolean
expression is stored in a variable or used in arithmetic.
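
   Because Boolean results are plain numbers, they can be assigned and
used in arithmetic; a minimal sketch:

```shell
# A Boolean expression evaluates to one or zero:
awk 'BEGIN { x = (1 < 2) && (3 > 2); print x, x + 1 }'   # prints 1 2
```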

   In addition, every Boolean expression is also a valid pattern, so you
can use one as a pattern to control the execution of rules.  The Boolean
operators are:

'BOOLEAN1 && BOOLEAN2'
     True if both BOOLEAN1 and BOOLEAN2 are true.  For example, the
     following statement prints the current input record if it contains
     both 'edu' and 'li':

          if ($0 ~ /edu/ && $0 ~ /li/) print

     The subexpression BOOLEAN2 is evaluated only if BOOLEAN1 is true.
     This can make a difference when BOOLEAN2 contains expressions that
     have side effects.  In the case of '$0 ~ /foo/ && ($2 == bar++)',
     the variable 'bar' is not incremented if there is no substring
     'foo' in the record.

'BOOLEAN1 || BOOLEAN2'
     True if at least one of BOOLEAN1 or BOOLEAN2 is true.  For example,
     the following statement prints all records in the input that
     contain _either_ 'edu' or 'li':

          if ($0 ~ /edu/ || $0 ~ /li/) print

     The subexpression BOOLEAN2 is evaluated only if BOOLEAN1 is false.
     This can make a difference when BOOLEAN2 contains expressions that
     have side effects.  (Thus, this test never really distinguishes
     records that contain both 'edu' and 'li'--as soon as 'edu' is
     matched, the full test succeeds.)

'! BOOLEAN'
     True if BOOLEAN is false.  For example, the following program
     prints 'no home!' in the unusual event that the 'HOME' environment
     variable is not defined:

          BEGIN { if (! ("HOME" in ENVIRON))
                      print "no home!" }

     (The 'in' operator is described in *note Reference to Elements::.)

   The '&&' and '||' operators are called "short-circuit" operators
because of the way they work.  Evaluation of the full expression is
"short-circuited" if the result can be determined partway through its
evaluation.
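
   Short-circuiting is observable when the right operand has a side
effect, as in this sketch:

```shell
# 'bar++' runs only when the left operand of '&&' is true:
echo 'foo 1' | awk '{ $0 ~ /foo/ && bar++; print bar + 0 }'   # prints 1
echo 'baz 1' | awk '{ $0 ~ /foo/ && bar++; print bar + 0 }'   # prints 0
```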

   Statements that end with '&&' or '||' can be continued simply by
putting a newline after them.  But you cannot put a newline in front of
either of these operators without using backslash continuation (*note
Statements/Lines::).

   The actual value of an expression using the '!' operator is either
one or zero, depending upon the truth value of the expression it is
applied to.  The '!' operator is often useful for changing the sense of
a flag variable from false to true and back again.  For example, the
following program is one way to print lines in between special
bracketing lines:

     $1 == "START"   { interested = ! interested; next }
     interested      { print }
     $1 == "END"     { interested = ! interested; next }

The variable 'interested', as with all 'awk' variables, starts out
initialized to zero, which is also false.  When a line is seen whose
first field is 'START', the value of 'interested' is toggled to true,
using '!'.  The next rule prints lines as long as 'interested' is true.
When a line is seen whose first field is 'END', 'interested' is toggled
back to false.(1)

   Most commonly, the '!' operator is used in the conditions of 'if' and
'while' statements, where it often makes more sense to phrase the logic
in the negative:

     if (! SOME CONDITION || SOME OTHER CONDITION) {
         ... DO WHATEVER PROCESSING ...
     }

     NOTE: The 'next' statement is discussed in *note Next Statement::.
     'next' tells 'awk' to skip the rest of the rules, get the next
     record, and start processing the rules over again at the top.  The
     reason it's there is to avoid printing the bracketing 'START' and
     'END' lines.

   ---------- Footnotes ----------

   (1) This program has a bug; it prints lines starting with 'END'.  How
would you fix it?


File: gawk.info,  Node: Conditional Exp,  Prev: Boolean Ops,  Up: Truth Values and Conditions

6.3.4 Conditional Expressions
-----------------------------

A "conditional expression" is a special kind of expression that has
three operands.  It allows you to use one expression's value to select
one of two other expressions.  The conditional expression in 'awk' is
the same as in the C language, as shown here:

     SELECTOR ? IF-TRUE-EXP : IF-FALSE-EXP

There are three subexpressions.  The first, SELECTOR, is always computed
first.  If it is "true" (not zero or not null), then IF-TRUE-EXP is
computed next, and its value becomes the value of the whole expression.
Otherwise, IF-FALSE-EXP is computed next, and its value becomes the
value of the whole expression.  For example, the following expression
produces the absolute value of 'x':

     x >= 0 ? x : -x

   Each time the conditional expression is computed, only one of
IF-TRUE-EXP and IF-FALSE-EXP is used; the other is ignored.  This is
important when the expressions have side effects.  For example, this
conditional expression examines element 'i' of either array 'a' or array
'b', and increments 'i':

     x == y ? a[i++] : b[i++]

This is guaranteed to increment 'i' exactly once, because each time only
one of the two increment expressions is executed and the other is not.
*Note Arrays::, for more information about arrays.
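
   A sketch that makes the single increment visible:

```shell
# Only the selected arm of '?:' is evaluated, so 'i' advances once:
awk 'BEGIN { x = 1; y = 2; i = 0
             r = (x == y ? a[i++] : b[i++])
             print i }'   # prints 1
```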

   As a minor 'gawk' extension, a statement that uses '?:' can be
continued simply by putting a newline after either character.  However,
putting a newline in front of either character does not work without
using backslash continuation (*note Statements/Lines::).  If '--posix'
is specified (*note Options::), this extension is disabled.


File: gawk.info,  Node: Function Calls,  Next: Precedence,  Prev: Truth Values and Conditions,  Up: Expressions

6.4 Function Calls
==================

A "function" is a name for a particular calculation.  This enables you
to ask for it by name at any point in the program.  For example, the
function 'sqrt()' computes the square root of a number.

   A fixed set of functions are "built in", which means they are
available in every 'awk' program.  The 'sqrt()' function is one of
these.  *Note Built-in:: for a list of built-in functions and their
descriptions.  In addition, you can define functions for use in your
program.  *Note User-defined:: for instructions on how to do this.
Finally, 'gawk' lets you write functions in C or C++ that may be called
from your program (*note Dynamic Extensions::).

   The way to use a function is with a "function call" expression, which
consists of the function name followed immediately by a list of
"arguments" in parentheses.  The arguments are expressions that provide
the raw materials for the function's calculations.  When there is more
than one argument, they are separated by commas.  If there are no
arguments, just write '()' after the function name.  The following
examples show function calls with and without arguments:

     sqrt(x^2 + y^2)        one argument
     atan2(y, x)            two arguments
     rand()                 no arguments

     CAUTION: Do not put any space between the function name and the
     opening parenthesis!  A user-defined function name looks just like
     the name of a variable--a space would make the expression look like
     concatenation of a variable with an expression inside parentheses.
     With built-in functions, space before the parenthesis is harmless,
     but it is best not to get into the habit of using space to avoid
     mistakes with user-defined functions.

   Each function expects a particular number of arguments.  For example,
the 'sqrt()' function must be called with a single argument, the number
whose square root to calculate:

     sqrt(ARGUMENT)

   Some of the built-in functions have one or more optional arguments.
If those arguments are not supplied, the functions use a reasonable
default value.  *Note Built-in:: for full details.  If arguments are
omitted in calls to user-defined functions, then those arguments are
treated as local variables.  Such local variables act like the empty
string if referenced where a string value is required, and like zero if
referenced where a numeric value is required (*note User-defined::).
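
   A sketch using a hypothetical function 'f()' whose second parameter
is deliberately omitted from the call:

```shell
# 'tmp' is never passed, so it is a local variable acting like 0:
awk 'function f(x, tmp) { tmp += x; return tmp }
     BEGIN { print f(5) }'   # prints 5
```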

   As an advanced feature, 'gawk' provides indirect function calls,
which is a way to choose the function to call at runtime, instead of
when you write the source code to your program.  We defer discussion of
this feature until later; see *note Indirect Calls::.

   Like every other expression, the function call has a value, often
called the "return value", which is computed by the function based on
the arguments you give it.  In this example, the return value of
'sqrt(ARGUMENT)' is the square root of ARGUMENT.  The following program
reads numbers, one number per line, and prints the square root of each
one:

     $ awk '{ print "The square root of", $1, "is", sqrt($1) }'
     1
     -| The square root of 1 is 1
     3
     -| The square root of 3 is 1.73205
     5
     -| The square root of 5 is 2.23607
     Ctrl-d

   A function can also have side effects, such as assigning values to
certain variables or doing I/O. This program shows how the 'match()'
function (*note String Functions::) changes the variables 'RSTART' and
'RLENGTH':

     {
         if (match($1, $2))
             print RSTART, RLENGTH
         else
             print "no match"
     }

Here is a sample run:

     $ awk -f matchit.awk
     aaccdd  c+
     -| 3 2
     foo     bar
     -| no match
     abcdefg e
     -| 5 1


File: gawk.info,  Node: Precedence,  Next: Locales,  Prev: Function Calls,  Up: Expressions

6.5 Operator Precedence (How Operators Nest)
============================================

"Operator precedence" determines how operators are grouped when
different operators appear close by in one expression.  For example, '*'
has higher precedence than '+'; thus, 'a + b * c' means to multiply 'b'
and 'c', and then add 'a' to the product (i.e., 'a + (b * c)').

   The normal precedence of the operators can be overruled by using
parentheses.  Think of the precedence rules as saying where the
parentheses are assumed to be.  In fact, it is wise to always use
parentheses whenever there is an unusual combination of operators,
because other people who read the program may not remember what the
precedence is in this case.  Even experienced programmers occasionally
forget the exact rules, which leads to mistakes.  Explicit parentheses
help prevent any such mistakes.

   When operators of equal precedence are used together, the leftmost
operator groups first, except for the assignment, conditional, and
exponentiation operators, which group in the opposite order.  Thus, 'a -
b + c' groups as '(a - b) + c' and 'a = b = c' groups as 'a = (b = c)'.
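
   These grouping rules are easy to verify (a sketch):

```shell
# Exponentiation groups right to left; subtraction groups left to right:
awk 'BEGIN { print 2 ^ 3 ^ 2 }'    # 2 ^ (3 ^ 2) = 512
awk 'BEGIN { print 10 - 4 - 3 }'   # (10 - 4) - 3 = 3
```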

   Normally the precedence of prefix unary operators does not matter,
because there is only one way to interpret them: innermost first.  Thus,
'$++i' means '$(++i)' and '++$x' means '++($x)'.  However, when another
operator follows the operand, then the precedence of the unary operators
can matter.  '$x^2' means '($x)^2', but '-x^2' means '-(x^2)', because
'-' has lower precedence than '^', whereas '$' has higher precedence.
Also, operators cannot be combined in a way that violates the precedence
rules; for example, '$$0++--' is not a valid expression because the
first '$' has higher precedence than the '++'; to avoid the problem the
expression can be rewritten as '$($0++)--'.

   This list presents 'awk''s operators, in order of highest to lowest
precedence:

'('...')'
     Grouping.

'$'
     Field reference.

'++ --'
     Increment, decrement.

'^ **'
     Exponentiation.  These operators group right to left.

'+ - !'
     Unary plus, minus, logical "not."

'* / %'
     Multiplication, division, remainder.

'+ -'
     Addition, subtraction.

String concatenation
     There is no special symbol for concatenation.  The operands are
     simply written side by side (*note Concatenation::).

'< <= == != > >= >> | |&'
     Relational and redirection.  The relational operators and the
     redirections have the same precedence level.  Characters such as
     '>' serve both as relationals and as redirections; the context
     distinguishes between the two meanings.

     Note that the I/O redirection operators in 'print' and 'printf'
     statements belong to the statement level, not to expressions.  The
     redirection does not produce an expression that could be the
     operand of another operator.  As a result, it does not make sense
     to use a redirection operator near another operator of lower
     precedence without parentheses.  Such combinations (e.g., 'print
     foo > a ? b : c') result in syntax errors.  The correct way to
     write this statement is 'print foo > (a ? b : c)'.

'~ !~'
     Matching, nonmatching.

'in'
     Array membership.

'&&'
     Logical "and."

'||'
     Logical "or."

'?:'
     Conditional.  This operator groups right to left.

'= += -= *= /= %= ^= **='
     Assignment.  These operators group right to left.

     NOTE: The '|&', '**', and '**=' operators are not specified by
     POSIX. For maximum portability, do not use them.
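
   The relational-versus-redirection distinction deserves a sketch;
note that an unparenthesized 'print 1 > 0' would instead create a file
named '0':

```shell
# Parentheses force '>' to act as a relational operator inside 'print':
awk 'BEGIN { print (1 > 0) }'   # prints 1 to standard output
```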


File: gawk.info,  Node: Locales,  Next: Expressions Summary,  Prev: Precedence,  Up: Expressions

6.6 Where You Are Makes a Difference
====================================

Modern systems support the notion of "locales": a way to tell the system
about the local character set and language.  The ISO C standard defines
a default '"C"' locale, which is an environment that is typical of what
many C programmers are used to.

   Once upon a time, the locale setting used to affect regexp matching,
but this is no longer true (*note Ranges and Locales::).

   Locales can affect record splitting.  For the normal case of 'RS =
"\n"', the locale is largely irrelevant.  For other single-character
record separators, setting 'LC_ALL=C' in the environment will give you
much better performance when reading records.  Otherwise, 'gawk' has to
make several function calls, _per input character_, to find the record
terminator.

   Locales can affect how dates and times are formatted (*note Time
Functions::).  For example, a common way to abbreviate the date
September 4, 2015, in the United States is "9/4/15."  In many countries
in Europe, however, it is abbreviated "4.9.15."  Thus, the '%x'
specification in a '"US"' locale might produce '9/4/15', while in a
'"EUROPE"' locale, it might produce '4.9.15'.

   According to POSIX, string comparison is also affected by locales
(similar to regular expressions).  The details are presented in *note
POSIX String Comparison::.

   Finally, the locale affects the value of the decimal point character
used when 'gawk' parses input data.  This is discussed in detail in
*note Conversion::.


File: gawk.info,  Node: Expressions Summary,  Prev: Locales,  Up: Expressions

6.7 Summary
===========

   * Expressions are the basic elements of computation in programs.
     They are built from constants, variables, function calls, and
     combinations of the various kinds of values with operators.

   * 'awk' supplies three kinds of constants: numeric, string, and
     regexp.  'gawk' lets you specify numeric constants in octal and
     hexadecimal (bases 8 and 16) as well as decimal (base 10).  In
     certain contexts, a standalone regexp constant such as '/foo/' has
     the same meaning as '$0 ~ /foo/'.

   * Variables hold values between uses in computations.  A number of
     built-in variables provide information to your 'awk' program, and a
     number of others let you control how 'awk' behaves.

   * Numbers are automatically converted to strings, and strings to
     numbers, as needed by 'awk'.  Numeric values are converted as if
     they were formatted with 'sprintf()' using the format in 'CONVFMT'.
     Locales can influence the conversions.

   * 'awk' provides the usual arithmetic operators (addition,
     subtraction, multiplication, division, modulus), and unary plus and
     minus.  It also provides comparison operators, Boolean operators,
     an array membership testing operator, and regexp matching
     operators.  String concatenation is accomplished by placing two
     expressions next to each other; there is no explicit operator.  The
     three-operand '?:' operator provides an "if-else" test within
     expressions.

   * Assignment operators provide convenient shorthands for common
     arithmetic operations.

   * In 'awk', a value is considered to be true if it is nonzero _or_
     non-null.  Otherwise, the value is false.

   * A variable's type is set upon each assignment and may change over
     its lifetime.  The type determines how it behaves in comparisons
     (string or numeric).

   * Function calls return a value that may be used as part of a larger
     expression.  Expressions used to pass parameter values are fully
     evaluated before the function is called.  'awk' provides built-in
     and user-defined functions; this is described in *note Functions::.

   * Operator precedence specifies the order in which operations are
     performed, unless explicitly overridden by parentheses.  'awk''s
     operator precedence is compatible with that of C.

   * Locales can affect the format of data as output by an 'awk'
     program, and occasionally the format for data read as input.


File: gawk.info,  Node: Patterns and Actions,  Next: Arrays,  Prev: Expressions,  Up: Top

7 Patterns, Actions, and Variables
**********************************

As you have already seen, each 'awk' statement consists of a pattern
with an associated action.  This major node describes how you build
patterns and actions, what kinds of things you can do within actions,
and 'awk''s predefined variables.

   The pattern-action rules and the statements available for use within
actions form the core of 'awk' programming.  In a sense, everything
covered up to here has been the foundation that programs are built on
top of.  Now it's time to start building something useful.

* Menu:

* Pattern Overview::            What goes into a pattern.
* Using Shell Variables::       How to use shell variables with 'awk'.
* Action Overview::             What goes into an action.
* Statements::                  Describes the various control statements in
                                detail.
* Built-in Variables::          Summarizes the predefined variables.
* Pattern Action Summary::      Patterns and Actions summary.


File: gawk.info,  Node: Pattern Overview,  Next: Using Shell Variables,  Up: Patterns and Actions

7.1 Pattern Elements
====================

* Menu:

* Regexp Patterns::             Using regexps as patterns.
* Expression Patterns::         Any expression can be used as a pattern.
* Ranges::                      Pairs of patterns specify record ranges.
* BEGIN/END::                   Specifying initialization and cleanup rules.
* BEGINFILE/ENDFILE::           Two special patterns for advanced control.
* Empty::                       The empty pattern, which matches every record.

Patterns in 'awk' control the execution of rules--a rule is executed
when its pattern matches the current input record.  The following is a
summary of the types of 'awk' patterns:

'/REGULAR EXPRESSION/'
     A regular expression.  It matches when the text of the input record
     fits the regular expression.  (*Note Regexp::.)

'EXPRESSION'
     A single expression.  It matches when its value is nonzero (if a
     number) or non-null (if a string).  (*Note Expression Patterns::.)

'BEGPAT, ENDPAT'
     A pair of patterns separated by a comma, specifying a "range" of
     records.  The range includes both the initial record that matches
     BEGPAT and the final record that matches ENDPAT.  (*Note Ranges::.)

'BEGIN'
'END'
     Special patterns for you to supply startup or cleanup actions for
     your 'awk' program.  (*Note BEGIN/END::.)

'BEGINFILE'
'ENDFILE'
     Special patterns for you to supply startup or cleanup actions to be
     done on a per-file basis.  (*Note BEGINFILE/ENDFILE::.)

'EMPTY'
     The empty pattern matches every input record.  (*Note Empty::.)
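
   A small sketch of a range pattern in action (the default action
prints the record):

```shell
# The range is inclusive: the START and END records are printed too.
printf 'a\nSTART\nb\nEND\nc\n' | awk '/START/,/END/'
# prints:
#   START
#   b
#   END
```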


File: gawk.info,  Node: Regexp Patterns,  Next: Expression Patterns,  Up: Pattern Overview

7.1.1 Regular Expressions as Patterns
-------------------------------------

Regular expressions are one of the first kinds of patterns presented in
this book.  This kind of pattern is simply a regexp constant in the
pattern part of a rule.  Its meaning is '$0 ~ /PATTERN/'.  The pattern
matches when the input record matches the regexp.  For example:

     /foo|bar|baz/  { buzzwords++ }
     END            { print buzzwords, "buzzwords seen" }


File: gawk.info,  Node: Expression Patterns,  Next: Ranges,  Prev: Regexp Patterns,  Up: Pattern Overview

7.1.2 Expressions as Patterns
-----------------------------

Any 'awk' expression is valid as an 'awk' pattern.  The pattern matches
if the expression's value is nonzero (if a number) or non-null (if a
string).  The expression is reevaluated each time the rule is tested
against a new input record.  If the expression uses fields such as '$1',
the value depends directly on the new input record's text; otherwise, it
depends on only what has happened so far in the execution of the 'awk'
program.

   Comparison expressions, using the comparison operators described in
*note Typing and Comparison::, are a very common kind of pattern.
Regexp matching and nonmatching are also very common expressions.  The
left operand of the '~' and '!~' operators is a string.  The right
operand is either a constant regular expression enclosed in slashes
('/REGEXP/'), or any expression whose string value is used as a dynamic
regular expression (*note Computed Regexps::).  The following example
prints the second field of each input record whose first field is
precisely 'li':

     $ awk '$1 == "li" { print $2 }' mail-list

(There is no output, because there is no person with the exact name
'li'.)  Contrast this with the following regular expression match, which
accepts any record with a first field that contains 'li':

     $ awk '$1 ~ /li/ { print $2 }' mail-list
     -| 555-5553
     -| 555-6699

   A regexp constant as a pattern is also a special case of an
expression pattern.  The expression '/li/' has the value one if 'li'
appears in the current input record.  Thus, as a pattern, '/li/' matches
any record containing 'li'.
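   As a quick illustration (a minimal sketch; the input words here are
made up), the numeric value of a regexp constant can be printed
directly:

```shell
# A regexp constant evaluates to 1 if it matches $0, 0 otherwise.
printf 'alice\nbob\n' | awk '{ print $0, /li/ }'
# 'alice' contains "li", so it prints 1; 'bob' prints 0
```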

   Boolean expressions are also commonly used as patterns.  Whether the
pattern matches an input record depends on whether its subexpressions
match.  For example, the following command prints all the records in
'mail-list' that contain both 'edu' and 'li':

     $ awk '/edu/ && /li/' mail-list
     -| Samuel       555-3430     samuel.lanceolis@shu.edu        A

   The following command prints all records in 'mail-list' that contain
_either_ 'edu' or 'li' (or both, of course):

     $ awk '/edu/ || /li/' mail-list
     -| Amelia       555-5553     amelia.zodiacusque@gmail.com    F
     -| Broderick    555-0542     broderick.aliquotiens@yahoo.com R
     -| Fabius       555-1234     fabius.undevicesimus@ucb.edu    F
     -| Julie        555-6699     julie.perscrutabor@skeeve.com   F
     -| Samuel       555-3430     samuel.lanceolis@shu.edu        A
     -| Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R

   The following command prints all records in 'mail-list' that do _not_
contain the string 'li':

     $ awk '! /li/' mail-list
     -| Anthony      555-3412     anthony.asserturo@hotmail.com   A
     -| Becky        555-7685     becky.algebrarum@gmail.com      A
     -| Bill         555-1675     bill.drowning@hotmail.com       A
     -| Camilla      555-2912     camilla.infusarum@skynet.be     R
     -| Fabius       555-1234     fabius.undevicesimus@ucb.edu    F
     -| Martin       555-6480     martin.codicibus@hotmail.com    A
     -| Jean-Paul    555-2127     jeanpaul.campanorum@nyu.edu     R

   The subexpressions of a Boolean operator in a pattern can be constant
regular expressions, comparisons, or any other 'awk' expressions.  Range
patterns are not expressions, so they cannot appear inside Boolean
patterns.  Likewise, the special patterns 'BEGIN', 'END', 'BEGINFILE',
and 'ENDFILE', which never match any input record, are not expressions
and cannot appear inside Boolean patterns.

   The precedence of the different operators that can appear in patterns
is described in *note Precedence::.


File: gawk.info,  Node: Ranges,  Next: BEGIN/END,  Prev: Expression Patterns,  Up: Pattern Overview

7.1.3 Specifying Record Ranges with Patterns
--------------------------------------------

A "range pattern" is made of two patterns separated by a comma, in the
form 'BEGPAT, ENDPAT'.  It is used to match ranges of consecutive input
records.  The first pattern, BEGPAT, controls where the range begins,
while ENDPAT controls where the range ends.  For example, the
following:

     awk '$1 == "on", $1 == "off"' myfile

prints every record in 'myfile' between 'on'/'off' pairs, inclusive.

   A range pattern starts out by matching BEGPAT against every input
record.  When a record matches BEGPAT, the range pattern is "turned on",
and the range pattern matches this record as well.  As long as the range
pattern stays turned on, it automatically matches every input record
read.  The range pattern also matches ENDPAT against every input record;
when this succeeds, the range pattern is "turned off" again for the
following record.  Then the range pattern goes back to checking BEGPAT
against each record.

   The record that turns on the range pattern and the one that turns it
off both match the range pattern.  If you don't want to operate on these
records, you can write 'if' statements in the rule's action to
distinguish them from the records you are interested in.
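   For example, the following sketch (with hypothetical 'on' and 'off'
marker words) uses an 'if' statement to print only the records strictly
between the delimiters:

```shell
# Match the whole range, but skip the boundary records themselves.
printf 'x\non\na\nb\noff\ny\n' |
awk '$1 == "on", $1 == "off" {
    if ($1 != "on" && $1 != "off")
        print
}'
```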

   It is possible for a pattern to be turned on and off by the same
record.  If the record satisfies both conditions, then the action is
executed for just that record.  For example, suppose there is text
between two identical markers (e.g., the '%' symbol), each on its own
line, that should be ignored.  A first attempt would be to combine a
range pattern that describes the delimited text with the 'next'
statement (not discussed yet, *note Next Statement::).  This causes
'awk' to skip any further processing of the current record and start
over again with the next input record.  Such a program looks like this:

     /^%$/,/^%$/    { next }
                    { print }

This program fails because the range pattern is both turned on and
turned off by the first line, which just has a '%' on it.  To accomplish
this task, write the program in the following manner, using a flag:

     /^%$/     { skip = ! skip; next }
     skip == 1 { next } # skip lines with 'skip' set

   In a range pattern, the comma (',') has the lowest precedence of all
the operators (i.e., it is evaluated last).  Thus, the following program
attempts to combine a range pattern with another, simpler test:

     echo Yes | awk '/1/,/2/ || /Yes/'

   The intent of this program is '(/1/,/2/) || /Yes/'.  However, 'awk'
interprets this as '/1/, (/2/ || /Yes/)'.  This cannot be changed or
worked around; range patterns do not combine with other patterns:

     $ echo Yes | gawk '(/1/,/2/) || /Yes/'
     error-> gawk: cmd. line:1: (/1/,/2/) || /Yes/
     error-> gawk: cmd. line:1:           ^ syntax error

   As a minor point of interest, although it is poor style, POSIX allows
you to put a newline after the comma in a range pattern.  (d.c.)


File: gawk.info,  Node: BEGIN/END,  Next: BEGINFILE/ENDFILE,  Prev: Ranges,  Up: Pattern Overview

7.1.4 The 'BEGIN' and 'END' Special Patterns
--------------------------------------------

All the patterns described so far are for matching input records.  The
'BEGIN' and 'END' special patterns are different.  They supply startup
and cleanup actions for 'awk' programs.  'BEGIN' and 'END' rules must
have actions; there is no default action for these rules because there
is no current record when they run.  'BEGIN' and 'END' rules are often
referred to as "'BEGIN' and 'END' blocks" by longtime 'awk' programmers.

* Menu:

* Using BEGIN/END::             How and why to use BEGIN/END rules.
* I/O And BEGIN/END::           I/O issues in BEGIN/END rules.


File: gawk.info,  Node: Using BEGIN/END,  Next: I/O And BEGIN/END,  Up: BEGIN/END

7.1.4.1 Startup and Cleanup Actions
...................................

A 'BEGIN' rule is executed once only, before the first input record is
read.  Likewise, an 'END' rule is executed once only, after all the
input is read.  For example:

     $ awk '
     > BEGIN { print "Analysis of \"li\"" }
     > /li/  { ++n }
     > END   { print "\"li\" appears in", n, "records." }' mail-list
     -| Analysis of "li"
     -| "li" appears in 4 records.

   This program finds the number of records in the input file
'mail-list' that contain the string 'li'.  The 'BEGIN' rule prints a
title for the report.  There is no need to use the 'BEGIN' rule to
initialize the counter 'n' to zero, as 'awk' does this automatically
(*note Variables::).  The second rule increments the variable 'n' every
time a record containing the pattern 'li' is read.  The 'END' rule
prints the value of 'n' at the end of the run.

   The special patterns 'BEGIN' and 'END' cannot be used in ranges or
with Boolean operators (indeed, they cannot be used with any operators).
An 'awk' program may have multiple 'BEGIN' and/or 'END' rules.  They are
executed in the order in which they appear: all the 'BEGIN' rules at
startup and all the 'END' rules at termination.  'BEGIN' and 'END' rules
may be intermixed with other rules.  This feature was added in the 1987
version of 'awk' and is included in the POSIX standard.  The original
(1978) version of 'awk' required the 'BEGIN' rule to be placed at the
beginning of the program, the 'END' rule to be placed at the end, and
only allowed one of each.  This is no longer required, but it is a good
idea to follow this template in terms of program organization and
readability.

   Multiple 'BEGIN' and 'END' rules are useful for writing library
functions, because each library file can have its own 'BEGIN' and/or
'END' rule to do its own initialization and/or cleanup.  The order in
which library functions are named on the command line controls the order
in which their 'BEGIN' and 'END' rules are executed.  Therefore, you
have to be careful when writing such rules in library files so that the
order in which they are executed doesn't matter.  *Note Options:: for
more information on using library functions.  *Note Library Functions::,
for a number of useful library functions.

   If an 'awk' program has only 'BEGIN' rules and no other rules, then
the program exits after the 'BEGIN' rules are run.(1)  However, if an
'END' rule exists, then the input is read, even if there are no other
rules in the program.  This is necessary in case the 'END' rule checks
the 'FNR' and 'NR' variables.
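   For instance, counting records needs nothing but an 'END' rule;
'awk' still reads all the input so that 'NR' has its final value (the
three input words here are illustrative):

```shell
# The END rule alone forces the input to be read; NR is then
# the total number of records.
printf 'one\ntwo\nthree\n' | awk 'END { print NR, "records" }'
# prints: 3 records
```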

   ---------- Footnotes ----------

   (1) The original version of 'awk' kept reading and ignoring input
until the end of the file was seen.


File: gawk.info,  Node: I/O And BEGIN/END,  Prev: Using BEGIN/END,  Up: BEGIN/END

7.1.4.2 Input/Output from 'BEGIN' and 'END' Rules
.................................................

There are several (sometimes subtle) points to be aware of when doing
I/O from a 'BEGIN' or 'END' rule.  The first has to do with the value of
'$0' in a 'BEGIN' rule.  Because 'BEGIN' rules are executed before any
input is read, there simply is no input record, and therefore no fields,
when executing 'BEGIN' rules.  References to '$0' and the fields yield a
null string or zero, depending upon the context.  One way to give '$0' a
real value is to execute a 'getline' command without a variable (*note
Getline::).  Another way is simply to assign a value to '$0'.
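   A minimal sketch of the assignment approach: once '$0' is set, the
fields and 'NF' are recomputed from it as usual:

```shell
# Assigning to $0 in a BEGIN rule splits it into fields immediately.
awk 'BEGIN { $0 = "a b c"; print NF, $2 }'
# prints: 3 b
```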

   The second point is similar to the first, but from the other
direction.  Traditionally, due largely to implementation issues, '$0'
and 'NF' were _undefined_ inside an 'END' rule.  The POSIX standard
specifies that 'NF' is available in an 'END' rule.  It contains the
number of fields from the last input record.  Most probably due to an
oversight, the standard does not say that '$0' is also preserved,
although logically one would think that it should be.  In fact, all of
BWK 'awk', 'mawk', and 'gawk' preserve the value of '$0' for use in
'END' rules.  Be aware, however, that some other implementations and
many older versions of Unix 'awk' do not.

   The third point follows from the first two.  The meaning of 'print'
inside a 'BEGIN' or 'END' rule is the same as always: 'print $0'.  If
'$0' is the null string, then this prints an empty record.  Many
longtime 'awk' programmers use an unadorned 'print' in 'BEGIN' and 'END'
rules, to mean 'print ""', relying on '$0' being null.  Although one
might generally get away with this in 'BEGIN' rules, it is a very bad
idea in 'END' rules, at least in 'gawk'.  It is also poor style, because
if an empty line is needed in the output, the program should print one
explicitly.

   Finally, the 'next' and 'nextfile' statements are not allowed in a
'BEGIN' rule, because the implicit
read-a-record-and-match-against-the-rules loop has not started yet.
Similarly, those statements are not valid in an 'END' rule, because all
the input has been read.  (*Note Next Statement:: and *note Nextfile
Statement::.)


File: gawk.info,  Node: BEGINFILE/ENDFILE,  Next: Empty,  Prev: BEGIN/END,  Up: Pattern Overview

7.1.5 The 'BEGINFILE' and 'ENDFILE' Special Patterns
----------------------------------------------------

This minor node describes a 'gawk'-specific feature.

   Two special kinds of rule, 'BEGINFILE' and 'ENDFILE', give you
"hooks" into 'gawk''s command-line file processing loop.  As with the
'BEGIN' and 'END' rules (*note BEGIN/END::), all 'BEGINFILE' rules in a
program are merged, in the order they are read by 'gawk', and all
'ENDFILE' rules are merged as well.

   The body of the 'BEGINFILE' rules is executed just before 'gawk'
reads the first record from a file.  'FILENAME' is set to the name of
the current file, and 'FNR' is set to zero.

   The 'BEGINFILE' rule provides you the opportunity to accomplish two
tasks that would otherwise be difficult or impossible to perform:

   * You can test if the file is readable.  Normally, it is a fatal
     error if a file named on the command line cannot be opened for
     reading.  However, you can bypass the fatal error and move on to
     the next file on the command line.

     You do this by checking if the 'ERRNO' variable is not the empty
     string; if so, then 'gawk' was not able to open the file.  In this
     case, your program can execute the 'nextfile' statement (*note
     Nextfile Statement::).  This causes 'gawk' to skip the file
     entirely.  Otherwise, 'gawk' exits with the usual fatal error.

   * If you have written extensions that modify the record handling (by
     inserting an "input parser"; *note Input Parsers::), you can invoke
     them at this point, before 'gawk' has started processing the file.
     (This is a _very_ advanced feature, currently used only by the
     'gawkextlib' project (http://sourceforge.net/projects/gawkextlib).)

   The 'ENDFILE' rule is called when 'gawk' has finished processing the
last record in an input file.  For the last input file, it will be
called before any 'END' rules.  The 'ENDFILE' rule is executed even for
empty input files.

   Normally, an error that occurs while reading input in the normal
input-processing loop is fatal.  However, if an 'ENDFILE'
rule is present, the error becomes non-fatal, and instead 'ERRNO' is
set.  This makes it possible to catch and process I/O errors at the
level of the 'awk' program.

   The 'next' statement (*note Next Statement::) is not allowed inside
either a 'BEGINFILE' or an 'ENDFILE' rule.  The 'nextfile' statement is
allowed only inside a 'BEGINFILE' rule, not inside an 'ENDFILE' rule.

   The 'getline' statement (*note Getline::) is restricted inside both
'BEGINFILE' and 'ENDFILE': only redirected forms of 'getline' are
allowed.

   'BEGINFILE' and 'ENDFILE' are 'gawk' extensions.  In most other 'awk'
implementations, or if 'gawk' is in compatibility mode (*note
Options::), they are not special.


File: gawk.info,  Node: Empty,  Prev: BEGINFILE/ENDFILE,  Up: Pattern Overview

7.1.6 The Empty Pattern
-----------------------

An empty (i.e., nonexistent) pattern is considered to match _every_
input record.  For example, the program:

     awk '{ print $1 }' mail-list

prints the first field of every record.


File: gawk.info,  Node: Using Shell Variables,  Next: Action Overview,  Prev: Pattern Overview,  Up: Patterns and Actions

7.2 Using Shell Variables in Programs
=====================================

'awk' programs are often used as components in larger programs written
in shell.  For example, it is very common to use a shell variable to
hold a pattern that the 'awk' program searches for.  There are two ways
to get the value of the shell variable into the body of the 'awk'
program.

   A common method is to use shell quoting to substitute the variable's
value into the program inside the script.  For example, consider the
following program:

     printf "Enter search pattern: "
     read pattern
     awk "/$pattern/ "'{ nmatches++ }
          END { print nmatches, "found" }' /path/to/data

The 'awk' program consists of two pieces of quoted text that are
concatenated together to form the program.  The first part is
double-quoted, which allows substitution of the 'pattern' shell variable
inside the quotes.  The second part is single-quoted.

   Variable substitution via quoting works, but can potentially be
messy.  It requires a good understanding of the shell's quoting rules
(*note Quoting::), and it's often difficult to correctly match up the
quotes when reading the program.

   A better method is to use 'awk''s variable assignment feature (*note
Assignment Options::) to assign the shell variable's value to an 'awk'
variable.  Then use dynamic regexps to match the pattern (*note Computed
Regexps::).  The following shows how to redo the previous example using
this technique:

     printf "Enter search pattern: "
     read pattern
     awk -v pat="$pattern" '$0 ~ pat { nmatches++ }
            END { print nmatches, "found" }' /path/to/data

Now, the 'awk' program is just one single-quoted string.  The assignment
'-v pat="$pattern"' still requires double quotes, in case there is
whitespace in the value of '$pattern'.  The 'awk' variable 'pat' could
be named 'pattern' too, but that would be more confusing.  Using a
variable also provides more flexibility, as the variable can be used
anywhere inside the program--for printing, as an array subscript, or for
any other use--without requiring the quoting tricks at every point in
the program.


File: gawk.info,  Node: Action Overview,  Next: Statements,  Prev: Using Shell Variables,  Up: Patterns and Actions

7.3 Actions
===========

An 'awk' program or script consists of a series of rules and function
definitions interspersed.  (Functions are described later.  *Note
User-defined::.)  A rule contains a pattern and an action, either of
which (but not both) may be omitted.  The purpose of the "action" is to
tell 'awk' what to do once a match for the pattern is found.  Thus, in
outline, an 'awk' program generally looks like this:

     [PATTERN]  '{ ACTION }'
      PATTERN  ['{ ACTION }']
     ...
     'function NAME(ARGS) { ... }'
     ...

   An action consists of one or more 'awk' "statements", enclosed in
braces ('{...}').  Each statement specifies one thing to do.  The
statements are separated by newlines or semicolons.  The braces around
an action must be used even if the action contains only one statement,
or if it contains no statements at all.  However, if you omit the action
entirely, omit the braces as well.  An omitted action is equivalent to
'{ print $0 }':

     /foo/  { }     match 'foo', do nothing -- empty action
     /foo/          match 'foo', print the record -- omitted action

   The following types of statements are supported in 'awk':

Expressions
     Call functions or assign values to variables (*note Expressions::).
     Executing this kind of statement simply computes the value of the
     expression.  This is useful when the expression has side effects
     (*note Assignment Ops::).

Control statements
     Specify the control flow of 'awk' programs.  The 'awk' language
     gives you C-like constructs ('if', 'for', 'while', and 'do') as
     well as a few special ones (*note Statements::).

Compound statements
     Enclose one or more statements in braces.  A compound statement is
     used in order to put several statements together in the body of an
     'if', 'while', 'do', or 'for' statement.

Input statements
     Use the 'getline' command (*note Getline::).  Also supplied in
     'awk' are the 'next' statement (*note Next Statement::) and the
     'nextfile' statement (*note Nextfile Statement::).

Output statements
     Such as 'print' and 'printf'.  *Note Printing::.

Deletion statements
     For deleting array elements.  *Note Delete::.


File: gawk.info,  Node: Statements,  Next: Built-in Variables,  Prev: Action Overview,  Up: Patterns and Actions

7.4 Control Statements in Actions
=================================

"Control statements", such as 'if', 'while', and so on, control the flow
of execution in 'awk' programs.  Most of 'awk''s control statements are
patterned after similar statements in C.

   All the control statements start with special keywords, such as 'if'
and 'while', to distinguish them from simple expressions.  Many control
statements contain other statements.  For example, the 'if' statement
contains another statement that may or may not be executed.  The
contained statement is called the "body".  To include more than one
statement in the body, group them into a single "compound statement"
with braces, separating them with newlines or semicolons.

* Menu:

* If Statement::                Conditionally execute some 'awk'
                                statements.
* While Statement::             Loop until some condition is satisfied.
* Do Statement::                Do specified action while looping until some
                                condition is satisfied.
* For Statement::               Another looping statement, that provides
                                initialization and increment clauses.
* Switch Statement::            Switch/case evaluation for conditional
                                execution of statements based on a value.
* Break Statement::             Immediately exit the innermost enclosing loop.
* Continue Statement::          Skip to the end of the innermost enclosing
                                loop.
* Next Statement::              Stop processing the current input record.
* Nextfile Statement::          Stop processing the current file.
* Exit Statement::              Stop execution of 'awk'.


File: gawk.info,  Node: If Statement,  Next: While Statement,  Up: Statements

7.4.1 The 'if'-'else' Statement
-------------------------------

The 'if'-'else' statement is 'awk''s decision-making statement.  It
looks like this:

     'if (CONDITION) THEN-BODY' ['else ELSE-BODY']

The CONDITION is an expression that controls what the rest of the
statement does.  If the CONDITION is true, THEN-BODY is executed;
otherwise, ELSE-BODY is executed.  The 'else' part of the statement is
optional.  The condition is considered false if its value is zero or the
null string; otherwise, the condition is true.  Refer to the following:

     if (x % 2 == 0)
         print "x is even"
     else
         print "x is odd"

   In this example, if the expression 'x % 2 == 0' is true (i.e., if the
value of 'x' is evenly divisible by two), then the first 'print'
statement is executed; otherwise, the second 'print' statement is
executed.  If the 'else' keyword appears on the same line as THEN-BODY
and THEN-BODY is not a compound statement (i.e., not surrounded by
braces), then a semicolon must separate THEN-BODY from the 'else'.  To
illustrate this, the previous example can be rewritten as:

     if (x % 2 == 0) print "x is even"; else
             print "x is odd"

If the ';' is left out, 'awk' can't interpret the statement and it
produces a syntax error.  Don't actually write programs this way,
because a human reader might fail to see the 'else' if it is not the
first thing on its line.


File: gawk.info,  Node: While Statement,  Next: Do Statement,  Prev: If Statement,  Up: Statements

7.4.2 The 'while' Statement
---------------------------

In programming, a "loop" is a part of a program that can be executed two
or more times in succession.  The 'while' statement is the simplest
looping statement in 'awk'.  It repeatedly executes a statement as long
as a condition is true.  For example:

     while (CONDITION)
       BODY

BODY is a statement called the "body" of the loop, and CONDITION is an
expression that controls how long the loop keeps running.  The first
thing the 'while' statement does is test the CONDITION.  If the
CONDITION is true, it executes the statement BODY.  (The CONDITION is
true when the value is not zero and not a null string.)  After BODY has
been executed, CONDITION is tested again, and if it is still true, BODY
executes again.  This process repeats until the CONDITION is no longer
true.  If the CONDITION is initially false, the body of the loop never
executes and 'awk' continues with the statement following the loop.
This example prints the first three fields of each record, one per line:

     awk '
     {
         i = 1
         while (i <= 3) {
             print $i
             i++
         }
     }' inventory-shipped

The body of this loop is a compound statement enclosed in braces,
containing two statements.  The loop works in the following manner:
first, the value of 'i' is set to one.  Then, the 'while' statement
tests whether 'i' is less than or equal to three.  This is true when 'i'
equals one, so the 'i'th field is printed.  Then the 'i++' increments
the value of 'i' and the loop repeats.  The loop terminates when 'i'
reaches four.

   A newline is not required between the condition and the body;
however, using one makes the program clearer unless the body is a
compound statement or else is very simple.  The newline after the open
brace that begins the compound statement is not required either, but the
program is harder to read without it.


File: gawk.info,  Node: Do Statement,  Next: For Statement,  Prev: While Statement,  Up: Statements

7.4.3 The 'do'-'while' Statement
--------------------------------

The 'do' loop is a variation of the 'while' looping statement.  The 'do'
loop executes the BODY once and then repeats the BODY as long as the
CONDITION is true.  It looks like this:

     do
       BODY
     while (CONDITION)

   Even if the CONDITION is false at the start, the BODY executes at
least once (and only once, unless executing BODY makes CONDITION true).
Contrast this with the corresponding 'while' statement:

     while (CONDITION)
         BODY

This statement does not execute the BODY even once if the CONDITION is
false to begin with.  The following is an example of a 'do' statement:

     {
         i = 1
         do {
             print $0
             i++
         } while (i <= 10)
     }

This program prints each input record 10 times.  However, it isn't a
very realistic example, because in this case an ordinary 'while' would
do just as well.  This situation reflects actual experience; only
occasionally is there a real use for a 'do' statement.


File: gawk.info,  Node: For Statement,  Next: Switch Statement,  Prev: Do Statement,  Up: Statements

7.4.4 The 'for' Statement
-------------------------

The 'for' statement makes it more convenient to count iterations of a
loop.  The general form of the 'for' statement looks like this:

     for (INITIALIZATION; CONDITION; INCREMENT)
       BODY

The INITIALIZATION, CONDITION, and INCREMENT parts are arbitrary 'awk'
expressions, and BODY stands for any 'awk' statement.

   The 'for' statement starts by executing INITIALIZATION.  Then, as
long as the CONDITION is true, it repeatedly executes BODY and then
INCREMENT.  Typically, INITIALIZATION sets a variable to either zero or
one, INCREMENT adds one to it, and CONDITION compares it against the
desired number of iterations.  For example:

     awk '
     {
         for (i = 1; i <= 3; i++)
             print $i
     }' inventory-shipped

This prints the first three fields of each input record, with one field
per line.

   It isn't possible to set more than one variable in the INITIALIZATION
part without using a multiple assignment statement such as 'x = y = 0'.
This makes sense only if all the initial values are equal.  (But it is
possible to initialize additional variables by writing their assignments
as separate statements preceding the 'for' loop.)

   The same is true of the INCREMENT part.  Incrementing additional
variables requires separate statements at the end of the loop.  The C
compound expression, using C's comma operator, is useful in this
context, but it is not supported in 'awk'.
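   A sketch of both workarounds, with the extra variable 'j' handled by
separate statements (the variable names here are illustrative):

```shell
# j is initialized before the loop and updated in the body,
# because the for header manages only i.
awk 'BEGIN {
    j = 100
    for (i = 1; i <= 3; i++) {
        print i, j
        j--
    }
}'
```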

   Most often, INCREMENT is an increment expression, as in the previous
example.  But this is not required; it can be any expression whatsoever.
For example, the following statement prints all the powers of two
between 1 and 100:

     for (i = 1; i <= 100; i *= 2)
         print i

   If there is nothing to be done, any of the three expressions in the
parentheses following the 'for' keyword may be omitted.  Thus,
'for (; x > 0;)' is equivalent to 'while (x > 0)'.  If the CONDITION is
omitted, it is treated as true, effectively yielding an "infinite loop"
(i.e., a loop that never terminates).
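   For example, all three parts may be omitted, leaving a loop that
must be ended some other way, such as with 'break':

```shell
# 'for (;;)' runs forever; break supplies the exit condition.
awk 'BEGIN {
    i = 1
    for (;;) {
        if (i > 3)
            break
        print i
        i++
    }
}'
```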

   In most cases, a 'for' loop is an abbreviation for a 'while' loop, as
shown here:

     INITIALIZATION
     while (CONDITION) {
       BODY
       INCREMENT
     }

The only exception is when the 'continue' statement (*note Continue
Statement::) is used inside the loop.  Changing a 'for' statement to a
'while' statement in this way can change the effect of the 'continue'
statement inside the loop.

   The 'awk' language has a 'for' statement in addition to a 'while'
statement because a 'for' loop is often both less work to type and more
natural to think of.  Counting the number of iterations is very common
in loops.  It can be easier to think of this counting as part of looping
rather than as something to do inside the loop.

   There is an alternative version of the 'for' loop, for iterating over
all the indices of an array:

     for (i in array)
         DO SOMETHING WITH array[i]

*Note Scanning an Array:: for more information on this version of the
'for' loop.
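   A short sketch (the array name and contents are illustrative; since
the iteration order is unspecified, the output is piped through 'sort'
for a predictable result):

```shell
# for-in visits every index of the array, in no particular order.
awk 'BEGIN {
    x["apple"] = 3; x["pear"] = 1
    for (i in x)
        print i, x[i]
}' | sort
```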


File: gawk.info,  Node: Switch Statement,  Next: Break Statement,  Prev: For Statement,  Up: Statements

7.4.5 The 'switch' Statement
----------------------------

This minor node describes a 'gawk'-specific feature.  If 'gawk' is in
compatibility mode (*note Options::), it is not available.

   The 'switch' statement allows the evaluation of an expression and the
execution of statements based on a 'case' match.  Case statements are
checked for a match in the order they are defined.  If no suitable
'case' is found, the 'default' section is executed, if supplied.

   Each 'case' contains a single constant, be it numeric, string, or
regexp.  The 'switch' expression is evaluated, and then each 'case''s
constant is compared against the result in turn.  The type of constant
determines the comparison: numeric or string do the usual comparisons.
A regexp constant does a regular expression match against the string
value of the original expression.  The general form of the 'switch'
statement looks like this:

     switch (EXPRESSION) {
     case VALUE OR REGULAR EXPRESSION:
         CASE-BODY
     default:
         DEFAULT-BODY
     }

   Control flow in the 'switch' statement works as it does in C. Once a
match to a given case is made, the case statement bodies execute until a
'break', 'continue', 'next', 'nextfile', or 'exit' is encountered, or
the end of the 'switch' statement itself.  For example:

     while ((c = getopt(ARGC, ARGV, "aksx")) != -1) {
         switch (c) {
         case "a":
             # report size of all files
             all_files = TRUE;
             break
         case "k":
             BLOCK_SIZE = 1024       # 1K block size
             break
         case "s":
             # do sums only
             sum_only = TRUE
             break
         case "x":
             # don't cross filesystems
             fts_flags = or(fts_flags, FTS_XDEV)
             break
         case "?":
         default:
             usage()
             break
         }
     }

   Note that if none of the statements specified here halt execution of
a matched 'case' statement, execution falls through to the next 'case'
until execution halts.  In this example, the 'case' for '"?"' falls
through to the 'default' case, which is to call a function named
'usage()'.  (The 'getopt()' function being called here is described in
*note Getopt Function::.)


File: gawk.info,  Node: Break Statement,  Next: Continue Statement,  Prev: Switch Statement,  Up: Statements

7.4.6 The 'break' Statement
---------------------------

The 'break' statement jumps out of the innermost 'for', 'while', or 'do'
loop that encloses it.  The following example finds the smallest divisor
of any integer, and also identifies prime numbers:

     # find smallest divisor of num
     {
         num = $1
         for (divisor = 2; divisor * divisor <= num; divisor++) {
             if (num % divisor == 0)
                 break
         }
         if (num % divisor == 0)
             printf "Smallest divisor of %d is %d\n", num, divisor
         else
             printf "%d is prime\n", num
     }

   When the remainder is zero in the first 'if' statement, 'awk'
immediately "breaks out" of the containing 'for' loop.  This means that
'awk' proceeds immediately to the statement following the loop and
continues processing.  (This is very different from the 'exit'
statement, which stops the entire 'awk' program.  *Note Exit
Statement::.)

   The following program illustrates how the CONDITION of a 'for' or
'while' statement could be replaced with a 'break' inside an 'if':

     # find smallest divisor of num
     {
         num = $1
         for (divisor = 2; ; divisor++) {
             if (num % divisor == 0) {
                 printf "Smallest divisor of %d is %d\n", num, divisor
                 break
             }
             if (divisor * divisor > num) {
                 printf "%d is prime\n", num
                 break
             }
         }
     }

   The 'break' statement is also used to break out of the 'switch'
statement.  This is discussed in *note Switch Statement::.

   The 'break' statement has no meaning when used outside the body of a
loop or 'switch'.  However, although it was never documented, historical
implementations of 'awk' treated the 'break' statement outside of a loop
as if it were a 'next' statement (*note Next Statement::).  (d.c.)
Recent versions of BWK 'awk' no longer allow this usage, nor does
'gawk'.


File: gawk.info,  Node: Continue Statement,  Next: Next Statement,  Prev: Break Statement,  Up: Statements

7.4.7 The 'continue' Statement
------------------------------

Similar to 'break', the 'continue' statement is used only inside 'for',
'while', and 'do' loops.  It skips over the rest of the loop body,
causing the next cycle around the loop to begin immediately.  Contrast
this with 'break', which jumps out of the loop altogether.

   The 'continue' statement in a 'for' loop directs 'awk' to skip the
rest of the body of the loop and resume execution with the
increment-expression of the 'for' statement.  The following program
illustrates this fact:

     BEGIN {
          for (x = 0; x <= 20; x++) {
              if (x == 5)
                  continue
              printf "%d ", x
          }
          print ""
     }

This program prints all the numbers from 0 to 20--except for 5, for
which the 'printf' is skipped.  Because the increment 'x++' is not
skipped, 'x' does not remain stuck at 5.  Contrast the 'for' loop from
the previous example with the following 'while' loop:

     BEGIN {
          x = 0
          while (x <= 20) {
              if (x == 5)
                  continue
              printf "%d ", x
              x++
          }
          print ""
     }

This program loops forever once 'x' reaches 5, because the increment
('x++') is never reached.

   The 'continue' statement has no special meaning with respect to the
'switch' statement, nor does it have any meaning when used outside the
body of a loop.  Historical versions of 'awk' treated a 'continue'
statement outside a loop the same way they treated a 'break' statement
outside a loop: as if it were a 'next' statement (*note Next
Statement::).  (d.c.)  Recent versions of BWK 'awk' no longer work this
way, nor does 'gawk'.


File: gawk.info,  Node: Next Statement,  Next: Nextfile Statement,  Prev: Continue Statement,  Up: Statements

7.4.8 The 'next' Statement
--------------------------

The 'next' statement forces 'awk' to immediately stop processing the
current record and go on to the next record.  This means that no further
rules are executed for the current record, and the rest of the current
rule's action isn't executed.

   Contrast this with the effect of the 'getline' function (*note
Getline::).  That also causes 'awk' to read the next record immediately,
but it does not alter the flow of control in any way (i.e., the rest of
the current action executes with a new input record).

   At the highest level, 'awk' program execution is a loop that reads an
input record and then tests each rule's pattern against it.  If you
think of this loop as a 'for' statement whose body contains the rules,
then the 'next' statement is analogous to a 'continue' statement.  It
skips to the end of the body of this implicit loop and executes the
increment (which reads another record).

   For example, suppose an 'awk' program works only on records with four
fields, and it shouldn't fail when given bad input.  To avoid
complicating the rest of the program, write a "weed out" rule near the
beginning, in the following manner:

     NF != 4 {
         printf("%s:%d: skipped: NF != 4\n", FILENAME, FNR) > "/dev/stderr"
         next
     }

Because of the 'next' statement, the program's subsequent rules won't
see the bad record.  The error message is redirected to the standard
error output stream, as error messages should be.  For more detail, see
*note Special Files::.

   If the 'next' statement causes the end of the input to be reached,
then the code in any 'END' rules is executed.  *Note BEGIN/END::.

   The 'next' statement is not allowed inside 'BEGINFILE' and 'ENDFILE'
rules.  *Note BEGINFILE/ENDFILE::.

   According to the POSIX standard, the behavior is undefined if the
'next' statement is used in a 'BEGIN' or 'END' rule.  'gawk' treats it
as a syntax error.  Although POSIX does not disallow it, most other
'awk' implementations don't allow the 'next' statement inside function
bodies (*note User-defined::).  Just as with any other 'next' statement,
a 'next' statement inside a function body reads the next record and
starts processing it with the first rule in the program.


File: gawk.info,  Node: Nextfile Statement,  Next: Exit Statement,  Prev: Next Statement,  Up: Statements

7.4.9 The 'nextfile' Statement
------------------------------

The 'nextfile' statement is similar to the 'next' statement.  However,
instead of abandoning processing of the current record, the 'nextfile'
statement instructs 'awk' to stop processing the current data file.

   Upon execution of the 'nextfile' statement, 'FILENAME' is updated to
the name of the next data file listed on the command line, 'FNR' is
reset to one, and processing starts over with the first rule in the
program.  If the 'nextfile' statement causes the end of the input to be
reached, then the code in any 'END' rules is executed.  An exception to
this is when 'nextfile' is invoked during execution of any statement in
an 'END' rule; in this case, it causes the program to stop immediately.
*Note BEGIN/END::.

   The 'nextfile' statement is useful when there are many data files to
process but it isn't necessary to process every record in every file.
Without 'nextfile', in order to move on to the next data file, a program
would have to continue scanning the unwanted records.  The 'nextfile'
statement accomplishes this much more efficiently.
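As an illustrative sketch, the following shell session prints only the first record of each data file; 'nextfile' abandons the rest of the current file as soon as that record has been seen.  (The file names and contents here are invented for the example; this assumes an 'awk' that supports 'nextfile', such as 'gawk', BWK 'awk', or 'mawk'.)

```shell
# Create two small throwaway data files (names are arbitrary).
printf 'alpha 1\nalpha 2\n' > file1.txt
printf 'beta 1\nbeta 2\n'  > file2.txt

# Print only the first record of each file; 'nextfile' skips
# the remaining records of the current file.
awk 'FNR == 1 { print FILENAME ": " $0; nextfile }' file1.txt file2.txt
# -| file1.txt: alpha 1
# -| file2.txt: beta 1
```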

   In 'gawk', execution of 'nextfile' causes additional things to
happen: any 'ENDFILE' rules are executed if 'gawk' is not currently in
an 'END' or 'BEGINFILE' rule, 'ARGIND' is incremented, and any
'BEGINFILE' rules are executed.  ('ARGIND' hasn't been introduced yet.
*Note Built-in Variables::.)

   With 'gawk', 'nextfile' is useful inside a 'BEGINFILE' rule to skip
over a file that would otherwise cause 'gawk' to exit with a fatal
error.  In this case, 'ENDFILE' rules are not executed.  *Note
BEGINFILE/ENDFILE::.

   Although it might seem that 'close(FILENAME)' would accomplish the
same as 'nextfile', this isn't true.  'close()' is reserved for closing
files, pipes, and coprocesses that are opened with redirections.  It is
not related to the main processing that 'awk' does with the files listed
in 'ARGV'.

     NOTE: For many years, 'nextfile' was a common extension.  In
     September 2012, it was accepted for inclusion into the POSIX
     standard.  See the Austin Group website
     (http://austingroupbugs.net/view.php?id=607).

   The current version of BWK 'awk' and 'mawk' also support 'nextfile'.
However, they don't allow the 'nextfile' statement inside function
bodies (*note User-defined::).  'gawk' does; a 'nextfile' inside a
function body reads the first record from the next file and starts
processing it with the first rule in the program, just as any other
'nextfile' statement.


File: gawk.info,  Node: Exit Statement,  Prev: Nextfile Statement,  Up: Statements

7.4.10 The 'exit' Statement
---------------------------

The 'exit' statement causes 'awk' to immediately stop executing the
current rule and to stop processing input; any remaining input is
ignored.  The 'exit' statement is written as follows:

     'exit' [RETURN CODE]

   When an 'exit' statement is executed from a 'BEGIN' rule, the program
stops processing everything immediately.  No input records are read.
However, if an 'END' rule is present, as part of executing the 'exit'
statement, the 'END' rule is executed (*note BEGIN/END::).  If 'exit' is
used in the body of an 'END' rule, it causes the program to stop
immediately.

   An 'exit' statement that is not part of a 'BEGIN' or 'END' rule stops
the execution of any further automatic rules for the current record,
skips reading any remaining input records, and executes the 'END' rule
if there is one.  'gawk' also skips any 'ENDFILE' rules; they do not
execute.

   In such a case, if you don't want the 'END' rule to do its job, set a
variable to a nonzero value before the 'exit' statement and check that
variable in the 'END' rule.  *Note Assert Function:: for an example that
does this.

   If an argument is supplied to 'exit', its value is used as the exit
status code for the 'awk' process.  If no argument is supplied, 'exit'
causes 'awk' to return a "success" status.  In the case where an
argument is supplied to a first 'exit' statement, and then 'exit' is
called a second time from an 'END' rule with no argument, 'awk' uses the
previously supplied exit value.  (d.c.)  *Note Exit Status:: for more
information.
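A minimal sketch of the basic case: a nonzero argument to 'exit' becomes the exit status of the 'awk' process, which the shell can then inspect in '$?':

```shell
awk 'BEGIN { exit 2 }'
echo "status: $?"
# -| status: 2
```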

   For example, suppose an error condition occurs that is difficult or
impossible to handle.  Conventionally, programs report this by exiting
with a nonzero status.  An 'awk' program can do this using an 'exit'
statement with a nonzero argument, as shown in the following example:

     BEGIN {
         if (("date" | getline date_now) <= 0) {
             print "Can't get system date" > "/dev/stderr"
             exit 1
         }
         print "current date is", date_now
         close("date")
     }

     NOTE: For full portability, exit values should be between zero and
     126, inclusive.  Negative values, and values of 127 or greater, may
     not produce consistent results across different operating systems.


File: gawk.info,  Node: Built-in Variables,  Next: Pattern Action Summary,  Prev: Statements,  Up: Patterns and Actions

7.5 Predefined Variables
========================

Most 'awk' variables are available to use for your own purposes; they
never change unless your program assigns values to them, and they never
affect anything unless your program examines them.  However, a few
variables in 'awk' have special built-in meanings.  'awk' examines some
of these automatically, so that they enable you to tell 'awk' how to do
certain things.  Others are set automatically by 'awk', so that they
carry information from the internal workings of 'awk' to your program.

   This minor node documents all of 'gawk''s predefined variables, most
of which are also documented in the major nodes describing their areas
of activity.

* Menu:

* User-modified::               Built-in variables that you change to control
                                'awk'.
* Auto-set::                    Built-in variables where 'awk' gives
                                you information.
* ARGC and ARGV::               Ways to use 'ARGC' and 'ARGV'.


File: gawk.info,  Node: User-modified,  Next: Auto-set,  Up: Built-in Variables

7.5.1 Built-in Variables That Control 'awk'
-------------------------------------------

The following is an alphabetical list of variables that you can change
to control how 'awk' does certain things.

   The variables that are specific to 'gawk' are marked with a pound
sign ('#').  These variables are 'gawk' extensions.  In other 'awk'
implementations or if 'gawk' is in compatibility mode (*note Options::),
they are not special.  (Any exceptions are noted in the description of
each variable.)

'BINMODE #'
     On non-POSIX systems, this variable specifies use of binary mode
     for all I/O. Numeric values of one, two, or three specify that
     input files, output files, or all files, respectively, should use
     binary I/O. A numeric value less than zero is treated as zero, and
     a numeric value greater than three is treated as three.
     Alternatively, string values of '"r"' or '"w"' specify that input
     files and output files, respectively, should use binary I/O. A
     string value of '"rw"' or '"wr"' indicates that all files should
     use binary I/O. Any other string value is treated the same as
     '"rw"', but causes 'gawk' to generate a warning message.  'BINMODE'
     is described in more detail in *note PC Using::.  'mawk' (*note
     Other Versions::) also supports this variable, but only using
     numeric values.

'CONVFMT'
     A string that controls the conversion of numbers to strings (*note
     Conversion::).  It works by being passed, in effect, as the first
     argument to the 'sprintf()' function (*note String Functions::).
     Its default value is '"%.6g"'.  'CONVFMT' was introduced by the
     POSIX standard.
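A quick sketch of the effect: using a number in a string context (here, concatenation) converts it according to 'CONVFMT'.  (The variable names and values below are chosen only for illustration.)

```shell
awk 'BEGIN {
    CONVFMT = "%.2g"
    pi = 3.14159
    msg = "pi is about " pi    # concatenation converts pi via CONVFMT
    print msg
}'
# -| pi is about 3.1
```

Note that 'print pi' by itself would use 'OFMT' for the conversion, not 'CONVFMT'.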

'FIELDWIDTHS #'
     A space-separated list of columns that tells 'gawk' how to split
     input with fixed columnar boundaries.  Assigning a value to
     'FIELDWIDTHS' overrides the use of 'FS' and 'FPAT' for field
     splitting.  *Note Constant Size:: for more information.

'FPAT #'
     A regular expression (as a string) that tells 'gawk' to create the
     fields based on text that matches the regular expression.
     Assigning a value to 'FPAT' overrides the use of 'FS' and
     'FIELDWIDTHS' for field splitting.  *Note Splitting By Content::
     for more information.

'FS'
     The input field separator (*note Field Separators::).  The value is
     a single-character string or a multicharacter regular expression
     that matches the separations between fields in an input record.  If
     the value is the null string ('""'), then each character in the
     record becomes a separate field.  (This behavior is a 'gawk'
     extension.  POSIX 'awk' does not specify the behavior when 'FS' is
     the null string.  Nonetheless, some other versions of 'awk' also
     treat '""' specially.)

     The default value is '" "', a string consisting of a single space.
     As a special exception, this value means that any sequence of
     spaces, TABs, and/or newlines is a single separator.  It also
     causes spaces, TABs, and newlines at the beginning and end of a
     record to be ignored.

     You can set the value of 'FS' on the command line using the '-F'
     option:

          awk -F, 'PROGRAM' INPUT-FILES

     If 'gawk' is using 'FIELDWIDTHS' or 'FPAT' for field splitting,
     assigning a value to 'FS' causes 'gawk' to return to the normal,
     'FS'-based field splitting.  An easy way to do this is to simply
     say 'FS = FS', perhaps with an explanatory comment.

'IGNORECASE #'
     If 'IGNORECASE' is nonzero or non-null, then all string comparisons
     and all regular expression matching are case-independent.  This
     applies to regexp matching with '~' and '!~', the 'gensub()',
     'gsub()', 'index()', 'match()', 'patsplit()', 'split()', and
     'sub()' functions, record termination with 'RS', and field
     splitting with 'FS' and 'FPAT'.  However, the value of 'IGNORECASE'
     does _not_ affect array subscripting and it does not affect field
     splitting when using a single-character field separator.  *Note
     Case-sensitivity::.

'LINT #'
     When this variable is true (nonzero or non-null), 'gawk' behaves as
     if the '--lint' command-line option is in effect (*note Options::).
     With a value of '"fatal"', lint warnings become fatal errors.  With
     a value of '"invalid"', only warnings about things that are
     actually invalid are issued.  (This is not fully implemented yet.)
     Any other true value prints nonfatal warnings.  Assigning a false
     value to 'LINT' turns off the lint warnings.

     This variable is a 'gawk' extension.  It is not special in other
     'awk' implementations.  Unlike with the other special variables,
     changing 'LINT' does affect the production of lint warnings, even
     if 'gawk' is in compatibility mode.  Much as the '--lint' and
     '--traditional' options independently control different aspects of
     'gawk''s behavior, the control of lint warnings during program
     execution is independent of the flavor of 'awk' being executed.

'OFMT'
     A string that controls conversion of numbers to strings (*note
     Conversion::) for printing with the 'print' statement.  It works by
     being passed as the first argument to the 'sprintf()' function
     (*note String Functions::).  Its default value is '"%.6g"'.
     Earlier versions of 'awk' used 'OFMT' to specify the format for
     converting numbers to strings in general expressions; this is now
     done by 'CONVFMT'.
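The division of labor between 'OFMT' and 'CONVFMT' can be sketched as follows; 'print' on a bare number uses 'OFMT', while conversion in an expression (here, concatenation with '""') uses 'CONVFMT'.  (The values are chosen only for illustration.)

```shell
awk 'BEGIN {
    OFMT = "%.2f"
    CONVFMT = "%.4f"
    x = 3.14159
    print x          # printed directly: uses OFMT
    print (x "")     # converted by concatenation: uses CONVFMT
}'
# -| 3.14
# -| 3.1416
```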

'OFS'
     The output field separator (*note Output Separators::).  It is
     output between the fields printed by a 'print' statement.  Its
     default value is '" "', a string consisting of a single space.

'ORS'
     The output record separator.  It is output at the end of every
     'print' statement.  Its default value is '"\n"', the newline
     character.  (*Note Output Separators::.)
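The two output separators can be seen together in a short sketch: 'OFS' appears between the comma-separated items of 'print', and 'ORS' ends the record.  (The separator values are chosen only for illustration.)

```shell
echo "a b c" |
awk 'BEGIN { OFS = "-"; ORS = ".\n" } { print $1, $2, $3 }'
# -| a-b-c.
```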

'PREC #'
     The working precision of arbitrary-precision floating-point
     numbers, 53 bits by default (*note Setting precision::).

'ROUNDMODE #'
     The rounding mode to use for arbitrary-precision arithmetic on
     numbers, by default '"N"' ('roundTiesToEven' in the IEEE 754
     standard; *note Setting the rounding mode::).

'RS'
     The input record separator.  Its default value is a string
     containing a single newline character, which means that an input
     record consists of a single line of text.  It can also be the null
     string, in which case records are separated by runs of blank lines.
     If it is a regexp, records are separated by matches of the regexp
     in the input text.  (*Note Records::.)

     The ability for 'RS' to be a regular expression is a 'gawk'
     extension.  In most other 'awk' implementations, or if 'gawk' is in
     compatibility mode (*note Options::), just the first character of
     'RS''s value is used.

'SUBSEP'
     The subscript separator.  It has the default value of '"\034"' and
     is used to separate the parts of the indices of a multidimensional
     array.  Thus, the expression 'foo["A", "B"]' really accesses
     'foo["A\034B"]' (*note Multidimensional::).
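This equivalence can be verified directly; the '("A", "B") in foo' form and the explicit 'SUBSEP' concatenation test the same array element:

```shell
awk 'BEGIN {
    foo["A", "B"] = 1
    # Both subscript forms name the same element.
    if (("A" SUBSEP "B") in foo && ("A", "B") in foo)
        print "same element"
}'
# -| same element
```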

'TEXTDOMAIN #'
     Used for internationalization of programs at the 'awk' level.  It
     sets the default text domain for specially marked string constants
     in the source text, as well as for the 'dcgettext()',
     'dcngettext()', and 'bindtextdomain()' functions (*note
     Internationalization::).  The default value of 'TEXTDOMAIN' is
     '"messages"'.


File: gawk.info,  Node: Auto-set,  Next: ARGC and ARGV,  Prev: User-modified,  Up: Built-in Variables

7.5.2 Built-in Variables That Convey Information
------------------------------------------------

The following is an alphabetical list of variables that 'awk' sets
automatically on certain occasions in order to provide information to
your program.

   The variables that are specific to 'gawk' are marked with a pound
sign ('#').  These variables are 'gawk' extensions.  In other 'awk'
implementations or if 'gawk' is in compatibility mode (*note Options::),
they are not special:

'ARGC', 'ARGV'
     The command-line arguments available to 'awk' programs are stored
     in an array called 'ARGV'.  'ARGC' is the number of command-line
     arguments present.  *Note Other Arguments::.  Unlike most 'awk'
     arrays, 'ARGV' is indexed from 0 to 'ARGC' - 1.  In the following
     example:

          $ awk 'BEGIN {
          >         for (i = 0; i < ARGC; i++)
          >             print ARGV[i]
          >      }' inventory-shipped mail-list
          -| awk
          -| inventory-shipped
          -| mail-list

     'ARGV[0]' contains 'awk', 'ARGV[1]' contains 'inventory-shipped',
     and 'ARGV[2]' contains 'mail-list'.  The value of 'ARGC' is three,
     one more than the index of the last element in 'ARGV', because the
     elements are numbered from zero.

     The names 'ARGC' and 'ARGV', as well as the convention of indexing
     the array from 0 to 'ARGC' - 1, are derived from the C language's
     method of accessing command-line arguments.

     The value of 'ARGV[0]' can vary from system to system.  Also, you
     should note that the program text is _not_ included in 'ARGV', nor
     are any of 'awk''s command-line options.  *Note ARGC and ARGV:: for
     information about how 'awk' uses these variables.  (d.c.)

'ARGIND #'
     The index in 'ARGV' of the current file being processed.  Every
     time 'gawk' opens a new data file for processing, it sets 'ARGIND'
     to the index in 'ARGV' of the file name.  When 'gawk' is processing
     the input files, 'FILENAME == ARGV[ARGIND]' is always true.

     This variable is useful in file processing; it allows you to tell
     how far along you are in the list of data files as well as to
     distinguish between successive instances of the same file name on
     the command line.

     While you can change the value of 'ARGIND' within your 'awk'
     program, 'gawk' automatically sets it to a new value when it opens
     the next file.

'ENVIRON'
     An associative array containing the values of the environment.  The
     array indices are the environment variable names; the elements are
     the values of the particular environment variables.  For example,
     'ENVIRON["HOME"]' might be '/home/arnold'.

     For POSIX 'awk', changing this array does not affect the
     environment passed on to any programs that 'awk' may spawn via
     redirection or the 'system()' function.

     However, beginning with version 4.2, if not in POSIX compatibility
     mode, 'gawk' does update its own environment when 'ENVIRON' is
     changed, thus changing the environment seen by programs that it
     creates.  You should therefore be especially careful if you modify
     'ENVIRON["PATH"]', which is the search path for finding executable
     programs.

     This can also affect the running 'gawk' program, since some of the
     built-in functions may pay attention to certain environment
     variables.  The most notable instance of this is 'mktime()' (*note
     Time Functions::), which pays attention to the value of the 'TZ'
     environment variable on many systems.

     Some operating systems may not have environment variables.  On such
     systems, the 'ENVIRON' array is empty (except for
     'ENVIRON["AWKPATH"]' and 'ENVIRON["AWKLIBPATH"]'; *note AWKPATH
     Variable:: and *note AWKLIBPATH Variable::).
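Reading the environment can be sketched in one line; 'GREETING' is a made-up variable set just for this example:

```shell
GREETING=hello awk 'BEGIN { print ENVIRON["GREETING"] }'
# -| hello
```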

'ERRNO #'
     If a system error occurs during a redirection for 'getline', during
     a read for 'getline', or during a 'close()' operation, then 'ERRNO'
     contains a string describing the error.

     In addition, 'gawk' clears 'ERRNO' before opening each command-line
     input file.  This enables checking if the file is readable inside a
     'BEGINFILE' pattern (*note BEGINFILE/ENDFILE::).

     Otherwise, 'ERRNO' works similarly to the C variable 'errno'.
     Except for the case just mentioned, 'gawk' _never_ clears it (sets
     it to zero or '""').  Thus, you should only expect its value to be
     meaningful when an I/O operation returns a failure value, such as
     'getline' returning -1.  You are, of course, free to clear it
     yourself before doing an I/O operation.

     If the value of 'ERRNO' corresponds to a system error in the C
     'errno' variable, then 'PROCINFO["errno"]' will be set to the value
     of 'errno'.  For non-system errors, 'PROCINFO["errno"]' will be
     zero.

'FILENAME'
     The name of the current input file.  When no data files are listed
     on the command line, 'awk' reads from the standard input and
     'FILENAME' is set to '"-"'.  'FILENAME' changes each time a new
     file is read (*note Reading Files::).  Inside a 'BEGIN' rule, the
     value of 'FILENAME' is '""', because there are no input files being
     processed yet.(1)  (d.c.)  Note, though, that using 'getline'
     (*note Getline::) inside a 'BEGIN' rule can give 'FILENAME' a
     value.

'FNR'
     The current record number in the current file.  'awk' increments
     'FNR' each time it reads a new record (*note Records::).  'awk'
     resets 'FNR' to zero each time it starts a new input file.
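The difference between 'FNR' and 'NR' shows up when the same file is read twice; 'FNR' starts over for each file while 'NR' keeps counting.  (The file name and contents are invented for this sketch.)

```shell
printf 'x\ny\n' > sample.txt    # throwaway data file
awk '{ print FILENAME, FNR, NR }' sample.txt sample.txt
# -| sample.txt 1 1
# -| sample.txt 2 2
# -| sample.txt 1 3
# -| sample.txt 2 4
```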

'NF'
     The number of fields in the current input record.  'NF' is set each
     time a new record is read, when a new field is created, or when
     '$0' changes (*note Fields::).

     Unlike most of the variables described in this node, assigning a
     value to 'NF' has the potential to affect 'awk''s internal
     workings.  In particular, assignments to 'NF' can be used to create
     fields in or remove fields from the current record.  *Note Changing
     Fields::.
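A small sketch of that effect: decreasing 'NF' discards the trailing fields and rebuilds '$0' accordingly.  (The sample record is invented for this example.)

```shell
# Assigning to NF rebuilds $0: the record is truncated to two fields.
echo "a b c d" | awk '{ NF = 2; print $0; print NF }'
# -| a b
# -| 2
```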

'FUNCTAB #'
     An array whose indices and corresponding values are the names of
     all the built-in, user-defined, and extension functions in the
     program.

          NOTE: Attempting to use the 'delete' statement with the
          'FUNCTAB' array causes a fatal error.  Any attempt to assign
          to an element of 'FUNCTAB' also causes a fatal error.

'NR'
     The number of input records 'awk' has processed since the beginning
     of the program's execution (*note Records::).  'awk' increments
     'NR' each time it reads a new record.

'PROCINFO #'
     The elements of this array provide access to information about the
     running 'awk' program.  The following elements (listed
     alphabetically) are guaranteed to be available:

     'PROCINFO["egid"]'
          The value of the 'getegid()' system call.

     'PROCINFO["errno"]'
          The value of the C 'errno' variable when 'ERRNO' is set to the
          associated error message.

     'PROCINFO["euid"]'
          The value of the 'geteuid()' system call.

     'PROCINFO["FS"]'
          This is '"FS"' if field splitting with 'FS' is in effect,
          '"FIELDWIDTHS"' if field splitting with 'FIELDWIDTHS' is in
          effect, or '"FPAT"' if field matching with 'FPAT' is in
          effect.

     'PROCINFO["gid"]'
          The value of the 'getgid()' system call.

     'PROCINFO["identifiers"]'
          A subarray, indexed by the names of all identifiers used in
          the text of the 'awk' program.  An "identifier" is simply the
          name of a variable (be it scalar or array), built-in function,
          user-defined function, or extension function.  For each
          identifier, the value of the element is one of the following:

          '"array"'
               The identifier is an array.

          '"builtin"'
               The identifier is a built-in function.

          '"extension"'
               The identifier is an extension function loaded via
               '@load' or '-l'.

          '"scalar"'
               The identifier is a scalar.

          '"untyped"'
               The identifier is untyped (could be used as a scalar or
               an array; 'gawk' doesn't know yet).

          '"user"'
               The identifier is a user-defined function.

          The values indicate what 'gawk' knows about the identifiers
          after it has finished parsing the program; they are _not_
          updated while the program runs.

     'PROCINFO["pgrpid"]'
          The process group ID of the current process.

     'PROCINFO["pid"]'
          The process ID of the current process.

     'PROCINFO["ppid"]'
          The parent process ID of the current process.

     'PROCINFO["strftime"]'
          The default time format string for 'strftime()'.  Assigning a
          new value to this element changes the default.  *Note Time
          Functions::.

     'PROCINFO["uid"]'
          The value of the 'getuid()' system call.

     'PROCINFO["version"]'
          The version of 'gawk'.

     The following additional elements in the array are available to
     provide information about the MPFR and GMP libraries if your
     version of 'gawk' supports arbitrary-precision arithmetic (*note
     Arbitrary Precision Arithmetic::):

     'PROCINFO["gmp_version"]'
          The version of the GNU MP library.

     'PROCINFO["mpfr_version"]'
          The version of the GNU MPFR library.

     'PROCINFO["prec_max"]'
          The maximum precision supported by MPFR.

     'PROCINFO["prec_min"]'
          The minimum precision required by MPFR.

     The following additional elements in the array are available to
     provide information about the version of the extension API, if your
     version of 'gawk' supports dynamic loading of extension functions
     (*note Dynamic Extensions::):

     'PROCINFO["api_major"]'
          The major version of the extension API.

     'PROCINFO["api_minor"]'
          The minor version of the extension API.

     On some systems, there may be elements in the array, '"group1"'
     through '"groupN"' for some N.  N is the number of supplementary
     groups that the process has.  Use the 'in' operator to test for
     these elements (*note Reference to Elements::).

     The following elements allow you to change 'gawk''s behavior:

     'PROCINFO["NONFATAL"]'
          If this element exists, then I/O errors for all output
          redirections become nonfatal.  *Note Nonfatal::.

     'PROCINFO["OUTPUT_NAME", "NONFATAL"]'
          Make output errors for OUTPUT_NAME be nonfatal.  *Note
          Nonfatal::.

     'PROCINFO["COMMAND", "pty"]'
          For two-way communication to COMMAND, use a pseudo-tty instead
          of setting up a two-way pipe.  *Note Two-way I/O:: for more
          information.

     'PROCINFO["INPUT_NAME", "READ_TIMEOUT"]'
          Set a timeout for reading from input redirection INPUT_NAME.
          *Note Read Timeout:: for more information.

     'PROCINFO["sorted_in"]'
          If this element exists in 'PROCINFO', its value controls the
          order in which array indices will be processed by 'for (INDX
          in ARRAY)' loops.  This is an advanced feature, so we defer
          the full description until later; see *note Scanning an
          Array::.
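
          As a brief preview, the following sketch assigns one of
          'gawk''s predefined orderings to this element so that the
          loop visits indices in ascending string order:

```shell
gawk 'BEGIN {
    a["banana"] = 1; a["apple"] = 2; a["cherry"] = 3
    PROCINFO["sorted_in"] = "@ind_str_asc"  # sort traversal by index, as strings
    for (i in a)
        print i
}'
```

          When run, this prints 'apple', 'banana', and 'cherry', in
          that order.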

'RLENGTH'
     The length of the substring matched by the 'match()' function
     (*note String Functions::).  'RLENGTH' is set by invoking the
     'match()' function.  Its value is the length of the matched string,
     or -1 if no match is found.

'RSTART'
     The start index in characters of the substring that is matched by
     the 'match()' function (*note String Functions::).  'RSTART' is set
     by invoking the 'match()' function.  Its value is the position of
     the string where the matched substring starts, or zero if no match
     was found.
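
     The following sketch shows the values of both 'RSTART' and
     'RLENGTH' after successful and failed calls to 'match()':

```shell
awk 'BEGIN {
    if (match("Hello, world!", /wor/))
        print RSTART, RLENGTH   # match starts at character 8, length 3
    match("Hello", /xyz/)
    print RSTART, RLENGTH       # no match: RSTART is 0, RLENGTH is -1
}'
```

     When run, this prints '8 3' followed by '0 -1'.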

'RT #'
     The input text that matched the text denoted by 'RS', the record
     separator.  It is set every time a record is read.
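
     For example, the following sketch uses a regexp record separator
     (itself a 'gawk' extension) so that 'RT' varies from record to
     record:

```shell
printf 'aXbYc' | gawk 'BEGIN { RS = "[XY]" }
                       { printf "record %d is %s, RT is \"%s\"\n", NR, $0, RT }'
```

     This prints 'X' as 'RT' for the first record and 'Y' for the
     second; the final record's 'RT' is empty because the input ended
     without matching 'RS'.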

'SYMTAB #'
     An array whose indices are the names of all defined global
     variables and arrays in the program.  'SYMTAB' makes 'gawk''s
     symbol table visible to the 'awk' programmer.  It is built as
     'gawk' parses the program and is complete before the program starts
     to run.

     The array may be used for indirect access to read or write the
     value of a variable:

          foo = 5
          SYMTAB["foo"] = 4
          print foo    # prints 4

     The 'isarray()' function (*note Type Functions::) may be used to
     test if an element in 'SYMTAB' is an array.  Also, you may not use
     the 'delete' statement with the 'SYMTAB' array.

     You may use an index for 'SYMTAB' that is not a predefined
     identifier:

          SYMTAB["xxx"] = 5
          print SYMTAB["xxx"]

     This works as expected: in this case 'SYMTAB' acts just like a
     regular array.  The only difference is that you can't then delete
     'SYMTAB["xxx"]'.

     The 'SYMTAB' array is more interesting than it looks.  Andrew
     Schorr points out that it effectively gives 'awk' data pointers.
     Consider his example:

          # Indirect multiply of any variable by amount, return result

          function multiply(variable, amount)
          {
              return SYMTAB[variable] *= amount
          }

     You would use it like this:

          BEGIN {
              answer = 10.5
              multiply("answer", 4)
              print "The answer is", answer
          }

     When run, this produces:

          $ gawk -f answer.awk
          -| The answer is 42

          NOTE: In order to avoid severe time-travel paradoxes,(2)
          neither 'FUNCTAB' nor 'SYMTAB' is available as an element
          within the 'SYMTAB' array.

                        Changing 'NR' and 'FNR'

   'awk' increments 'NR' and 'FNR' each time it reads a record, instead
of setting them to the absolute value of the number of records read.
This means that a program can change these variables and their new
values are incremented for each record.  (d.c.)  The following example
shows this:

     $ echo '1
     > 2
     > 3
     > 4' | awk 'NR == 2 { NR = 17 }
     > { print NR }'
     -| 1
     -| 17
     -| 18
     -| 19

Before 'FNR' was added to the 'awk' language (*note V7/SVR3.1::), many
'awk' programs used this feature to track the number of records in a
file by resetting 'NR' to zero when 'FILENAME' changed.
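
   Such a program might have looked like the following sketch (the file
names here are hypothetical).  Because the rule runs after the first
record of each new file has already been read, it resets 'NR' to one
rather than zero:

```shell
printf 'a\nb\n' > file1.tmp   # hypothetical sample input files
printf 'c\n'    > file2.tmp
awk 'FILENAME != prevfile { NR = 1; prevfile = FILENAME }  # new file: restart count
     { print FILENAME ":" NR }' file1.tmp file2.tmp
rm -f file1.tmp file2.tmp
```

This prints 'file1.tmp:1', 'file1.tmp:2', and 'file2.tmp:1', just as a
program using 'FNR' would.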

   ---------- Footnotes ----------

   (1) Some early implementations of Unix 'awk' initialized 'FILENAME'
to '"-"', even if there were data files to be processed.  This behavior
was incorrect and should not be relied upon in your programs.

   (2) Not to mention difficult implementation issues.


File: gawk.info,  Node: ARGC and ARGV,  Prev: Auto-set,  Up: Built-in Variables

7.5.3 Using 'ARGC' and 'ARGV'
-----------------------------

*note Auto-set:: presented the following program describing the
information contained in 'ARGC' and 'ARGV':

     $ awk 'BEGIN {
     >        for (i = 0; i < ARGC; i++)
     >            print ARGV[i]
     >      }' inventory-shipped mail-list
     -| awk
     -| inventory-shipped
     -| mail-list

In this example, 'ARGV[0]' contains 'awk', 'ARGV[1]' contains
'inventory-shipped', and 'ARGV[2]' contains 'mail-list'.  Notice that
the 'awk' program is not entered in 'ARGV'.  The other command-line
options, with their arguments, are also not entered.  This includes
variable assignments done with the '-v' option (*note Options::).
Normal variable assignments on the command line _are_ treated as
arguments and do show up in the 'ARGV' array.  Given the following
program in a file named 'showargs.awk':

     BEGIN {
         printf "A=%d, B=%d\n", A, B
         for (i = 0; i < ARGC; i++)
             printf "\tARGV[%d] = %s\n", i, ARGV[i]
     }
     END   { printf "A=%d, B=%d\n", A, B }

Running it produces the following:

     $ awk -v A=1 -f showargs.awk B=2 /dev/null
     -| A=1, B=0
     -|        ARGV[0] = awk
     -|        ARGV[1] = B=2
     -|        ARGV[2] = /dev/null
     -| A=1, B=2

   A program can alter 'ARGC' and the elements of 'ARGV'.  Each time
'awk' reaches the end of an input file, it uses the next element of
'ARGV' as the name of the next input file.  By storing a different
string there, a program can change which files are read.  Use '"-"' to
represent the standard input.  Storing additional elements and
incrementing 'ARGC' causes additional files to be read.
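
   For example, the following sketch (the file name is hypothetical)
appends one more input file from the 'BEGIN' rule:

```shell
printf 'extra data\n' > extra.tmp   # hypothetical additional input file
awk 'BEGIN { ARGV[ARGC++] = "extra.tmp" }  # schedule one more input file
     { print FILENAME ": " $0 }' /dev/null
rm -f extra.tmp
```

When run, it prints 'extra.tmp: extra data', even though only
'/dev/null' was named on the command line.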

   If the value of 'ARGC' is decreased, that eliminates input files from
the end of the list.  By recording the old value of 'ARGC' elsewhere, a
program can treat the eliminated arguments as something other than file
names.

   To eliminate a file from the middle of the list, store the null
string ('""') into 'ARGV' in place of the file's name.  As a special
feature, 'awk' ignores file names that have been replaced with the null
string.  Another option is to use the 'delete' statement to remove
elements from 'ARGV' (*note Delete::).

   All of these actions are typically done in the 'BEGIN' rule, before
actual processing of the input begins.  *Note Split Program:: and *note
Tee Program:: for examples of each way of removing elements from 'ARGV'.

   To actually get options into an 'awk' program, end the 'awk' options
with '--' and then supply the 'awk' program's options, in the following
manner:

     awk -f myprog.awk -- -v -q file1 file2 ...

   The following fragment processes 'ARGV' in order to examine, and then
remove, the previously mentioned command-line options:

     BEGIN {
         for (i = 1; i < ARGC; i++) {
             if (ARGV[i] == "-v")
                 verbose = 1
             else if (ARGV[i] == "-q")
                 debug = 1
             else if (ARGV[i] ~ /^-./) {
                 e = sprintf("%s: unrecognized option -- %c",
                         ARGV[0], substr(ARGV[i], 2, 1))
                 print e > "/dev/stderr"
             } else
                 break
             delete ARGV[i]
         }
     }

   Ending the 'awk' options with '--' isn't necessary in 'gawk'.  Unless
'--posix' has been specified, 'gawk' silently puts any unrecognized
options into 'ARGV' for the 'awk' program to deal with.  As soon as it
sees an unknown option, 'gawk' stops looking for other options that it
might otherwise recognize.  The previous command line with 'gawk' would
be:

     gawk -f myprog.awk -q -v file1 file2 ...

Because '-q' is not a valid 'gawk' option, it and the following '-v' are
passed on to the 'awk' program.  (*Note Getopt Function:: for an 'awk'
library function that parses command-line options.)

   When designing your program, you should choose options that don't
conflict with 'gawk''s, because it will process any options that it
accepts before passing the rest of the command line on to your program.
Using '#!' with the '-E' option may help (*note Executable Scripts:: and
*note Options::).


File: gawk.info,  Node: Pattern Action Summary,  Prev: Built-in Variables,  Up: Patterns and Actions

7.6 Summary
===========

   * Pattern-action pairs make up the basic elements of an 'awk'
     program.  Patterns are either normal expressions, range
     expressions, or regexp constants; one of the special keywords
     'BEGIN', 'END', 'BEGINFILE', or 'ENDFILE'; or empty.  The action
     executes if the current record matches the pattern.  Empty
     (missing) patterns match all records.

   * I/O from 'BEGIN' and 'END' rules has certain constraints.  This is
     also true, only more so, for 'BEGINFILE' and 'ENDFILE' rules.  The
     latter two give you "hooks" into 'gawk''s file processing, allowing
     you to recover from a file that otherwise would cause a fatal error
     (such as a file that cannot be opened).

   * Shell variables can be used in 'awk' programs by careful use of
     shell quoting.  It is easier to pass a shell variable into 'awk' by
     using the '-v' option and an 'awk' variable.

   * Actions consist of statements enclosed in curly braces.  Statements
     are built up from expressions, control statements, compound
     statements, input and output statements, and deletion statements.

   * The control statements in 'awk' are 'if'-'else', 'while', 'for',
     and 'do'-'while'.  'gawk' adds the 'switch' statement.  There are
     two flavors of 'for' statement: one for performing general looping,
     and the other for iterating through an array.

   * 'break' and 'continue' let you exit early or start the next
     iteration of a loop (or get out of a 'switch').

   * 'next' and 'nextfile' let you read the next record and start over
     at the top of your program or skip to the next input file and start
     over, respectively.

   * The 'exit' statement terminates your program.  When executed from
     an action (or function body), it transfers control to the 'END'
     statements.  From an 'END' statement body, it exits immediately.
     You may pass an optional numeric value to be used as 'awk''s exit
     status.

   * Some predefined variables provide control over 'awk', mainly for
     I/O. Other variables convey information from 'awk' to your program.

   * 'ARGC' and 'ARGV' make the command-line arguments available to your
     program.  Manipulating them from a 'BEGIN' rule lets you control
     how 'awk' will process the provided data files.


File: gawk.info,  Node: Arrays,  Next: Functions,  Prev: Patterns and Actions,  Up: Top

8 Arrays in 'awk'
*****************

An "array" is a table of values called "elements".  The elements of an
array are distinguished by their "indices".  Indices may be either
numbers or strings.

   This major node describes how arrays work in 'awk', how to use array
elements, how to scan through every element in an array, and how to
remove array elements.  It also describes how 'awk' simulates
multidimensional arrays, as well as some of the less obvious points
about array usage.  The major node moves on to discuss 'gawk''s facility
for sorting arrays, and ends with a brief description of 'gawk''s
ability to support true arrays of arrays.

* Menu:

* Array Basics::                The basics of arrays.
* Numeric Array Subscripts::    How to use numbers as subscripts in
                                'awk'.
* Uninitialized Subscripts::    Using uninitialized variables as subscripts.
* Delete::                      The 'delete' statement removes an element
                                from an array.
* Multidimensional::            Emulating multidimensional arrays in
                                'awk'.
* Arrays of Arrays::            True multidimensional arrays.
* Arrays Summary::              Summary of arrays.


File: gawk.info,  Node: Array Basics,  Next: Numeric Array Subscripts,  Up: Arrays

8.1 The Basics of Arrays
========================

This minor node presents the basics: working with elements in arrays one
at a time, and traversing all of the elements in an array.

* Menu:

* Array Intro::                 Introduction to Arrays
* Reference to Elements::       How to examine one element of an array.
* Assigning Elements::          How to change an element of an array.
* Array Example::               Basic Example of an Array
* Scanning an Array::           A variation of the 'for' statement. It
                                loops through the indices of an array's
                                existing elements.
* Controlling Scanning::        Controlling the order in which arrays are
                                scanned.


File: gawk.info,  Node: Array Intro,  Next: Reference to Elements,  Up: Array Basics

8.1.1 Introduction to Arrays
----------------------------

     Doing linear scans over an associative array is like trying to club
     someone to death with a loaded Uzi.
                            -- _Larry Wall_

   The 'awk' language provides one-dimensional arrays for storing groups
of related strings or numbers.  Every 'awk' array must have a name.
Array names have the same syntax as variable names; any valid variable
name would also be a valid array name.  But one name cannot be used in
both ways (as an array and as a variable) in the same 'awk' program.

   Arrays in 'awk' superficially resemble arrays in other programming
languages, but there are fundamental differences.  In 'awk', it isn't
necessary to specify the size of an array before starting to use it.
Additionally, any number or string, not just consecutive integers, may
be used as an array index.
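
   For example, the following sketch stores and retrieves elements
using string, positive, and negative numeric indices, none of them
declared in advance:

```shell
awk 'BEGIN {
    tab["pi"] = 3.14159      # a string index
    tab[42]   = "forty-two"  # a numeric index
    tab[-5]   = "negative"   # indices need not be consecutive or nonnegative
    print tab["pi"], tab[42], tab[-5]
}'
```

When run, this prints '3.14159 forty-two negative'.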

   In most other languages, arrays must be "declared" before use,
including a specification of how many elements or components they
contain.  In such languages, the declaration causes a contiguous block
of memory to be allocated for that many elements.  Usually, an index in
the array must be a nonnegative integer.  For example, the index zero
specifies the first element in the array, which is actually stored at
the beginning of the block of memory.  Index one specifies the second
element, which is stored in memory right after the first element, and so
on.  It is impossible to add more elements to the array, because it has
room only for as many elements as given in the declaration.  (Some
languages allow arbitrary starting and ending indices--e.g., '15 ..
27'--but the size of the array is still fixed when the array is
declared.)

   A contiguous array of four elements might look like *note Figure 8.1:
figure-array-elements, conceptually, if the element values are eight,
'"foo"', '""', and 30.