openSUSE 10.3 Reference
September 27, 2007
www.novell.com
Copyright 2006–2007 Novell, Inc.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU
Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with the Invariant Section being this copyright notice and license. A copy of the license is included in the section entitled GNU Free Documentation License.
SUSE, openSUSE, the openSUSE logo, Novell, the Novell logo, and the N logo are registered
trademarks of Novell, Inc. in the United States and other countries. Linux* is a registered trademark
of Linus Torvalds. All other third-party trademarks are the property of their respective owners. A
trademark symbol (®, ™, etc.) denotes a Novell trademark; an asterisk (*) denotes a third-party
trademark.
All information found in this book has been compiled with utmost attention to detail. However, this
does not guarantee complete accuracy. Neither Novell, Inc., SUSE LINUX Products GmbH, the authors,
nor the translators shall be held liable for possible errors or the consequences thereof.
Contents

About This Guide

1 Remote Installation

Part II Administration

3 Online Update
    Navigation in Modules
    Restriction of Key Combinations

7 Printer Operation

9 Xen Virtualization
    Basic Components
    Setting Up a Virtual Machine Host
    Setting Up Virtual Machines
    Managing a Virtualization Environment

Debugging

Runtime Support
Software Development
Software Compilation on Biarch Platforms
Kernel Specifications

udev Rules

Terminology
Major File Systems in Linux
Some Other Supported File Systems
Large File Support in Linux
For More Information

Part IV Services

20 Basic Networking

Installation
Activating SLP
SLP Front-Ends in openSUSE
Installation over SLP
Providing Services via SLP
For More Information

DNS Terminology
Installation
Configuration with YaST
Starting the Name Server BIND
The Configuration File /etc/named.conf
Zone Files
Dynamic Update of Zone Data
Secure Transactions
DNS Security
For More Information

23 DHCP

25 Using NIS

29 Samba
    Terminology
    Installing a Samba Server
    Starting and Stopping Samba
    Configuring a Samba Server
    Configuring Clients
    Samba as Login Server
    For More Information

Quick Start
Configuring Apache
Starting and Stopping Apache
Installing, Activating, and Configuring Modules
Getting CGI Scripts to Work
Setting Up a Secure Web Server with SSL
Avoiding Security Problems
Troubleshooting
For More Information

Part V Mobility

32 Power Management

33 Wireless Communication
    Wireless LAN
    Bluetooth
    Infrared Data Transmission

Part VI Security

38 Network Authentication: Kerberos
    Kerberos Terminology
    How Kerberos Works
    Users' View of Kerberos
    For More Information

A An Example Network

B GNU Licenses

Index
1 Feedback
We want to hear your comments and suggestions about this manual and the other documentation included with this product. Please use the User Comments feature at the
bottom of each page of the online documentation and enter your comments there.
2 Additional Documentation
We provide HTML and PDF versions of our books in different languages. The following
manuals are available on this product:
Start-Up
Guides you through the installation and basic configuration of your system. For
newcomers, the manual also introduces basic Linux concepts such as the file system,
the user concept, and access permissions, and gives an overview of the features
openSUSE offers to support mobile computing. It also provides help and advice for troubleshooting.
KDE Quick Start
Gives a short introduction to the KDE desktop and some key applications running
on it.
KDE User Guide
Introduces the KDE desktop of openSUSE and a variety of applications shipping
with it. It guides you through using these applications and helps you perform key
tasks. It is intended mainly for end users who want to make efficient use of KDE
in everyday life.
GNOME Quick Start
Gives a short introduction to the GNOME desktop and some key applications
running on it.
GNOME User Guide
Introduces the GNOME desktop of openSUSE and a variety of applications you
will encounter when working with the GNOME desktop. It guides you through
using these applications and helps you perform key tasks. It is intended mainly for
end users who want to make efficient use of applications running on the GNOME
desktop.
Reference
Gives you a general understanding of openSUSE and covers advanced system administration tasks. It is intended mainly for system administrators and home users
with basic system administration knowledge. It provides detailed information about
advanced deployment scenarios, administration of your system, the interaction of
key system components and the set-up of various network and file services openSUSE offers.
Novell AppArmor Quick Start
Helps you understand the main concepts behind Novell AppArmor.
Novell AppArmor Administration Guide
Contains in-depth information about the use of Novell AppArmor in your environment.
Lessons For Lizards
A community book project for the openSUSE distribution. A snapshot of the
manual written by the open source community is released on an equal footing with
the Novell/SUSE manuals. The lessons are written in a cookbook style and cover
more specific or exotic topics than the traditional manuals. For more information,
see http://developer.novell.com/wiki/index.php/Lessons_for_Lizards.
Find HTML versions of the openSUSE manuals in your installed system under /usr/
share/doc/manual or in the help centers of your KDE or GNOME desktop. You
can also access the documentation on the Web at http://www.novell.com/
documentation/opensuse103/ where you can download PDF or HTML versions
of the manuals. For information where to find the books on your installation media,
refer to the Release Notes of this product, available from your installed system under
/usr/share/doc/release-notes/.
3 Documentation Conventions
The following typographical conventions are used in this manual:
/etc/passwd: filenames and directory names
placeholder: replace placeholder with the actual value
5 Source Code
The source code of openSUSE is publicly available. To download the source code,
proceed as outlined under http://www.novell.com/products/suselinux/source_code.html.
If requested, we will send you the source code on a DVD. We need
to charge a $15 or €15 fee for creation, handling, and postage. To request a DVD of the
source code, send an e-mail to sourcedvd@suse.de or mail the request to:
SUSE Linux Products GmbH
Product Management openSUSE
Maxfeldstr. 5
D-90409 Nürnberg
Germany
6 Acknowledgments
With a lot of voluntary commitment, the developers of Linux cooperate on a global
scale to promote the development of Linux. We thank them for their efforts; this distribution would not exist without them. Furthermore, we thank Frank Zappa and Pawar.
Special thanks, of course, go to Linus Torvalds.
Have a lot of fun!
Your SUSE Team
Remote Installation
openSUSE can be installed in several different ways. As well as the usual media installation covered in Chapter 1, Installation with YaST (Start-Up), you can choose
from various network-based approaches or even take a completely hands-off approach
to the installation of openSUSE.
Each method is introduced by means of two short checklists: one listing the prerequisites
for the method and the other illustrating the basic procedure. More detail is then provided for all the techniques used in these installation scenarios.
NOTE
In the following sections, the system to hold your new openSUSE installation
is referred to as target system or installation target. The term installation source
is used for all sources of installation data. This includes physical media, such
as CD and DVD, and network servers distributing the installation data in your
network.
IMPORTANT
The configuration of the X Window System is not part of any remote installation
process. After the installation has finished, log in to the target system as root,
enter telinit 3, and start SaX2 to configure the graphics hardware as described in Section Setting Up Graphics Card and Monitor (Chapter 2, Setting
Up Hardware Components with YaST, Start-Up).
2 Boot the target system using the first CD or DVD of the openSUSE media kit.
3 When the boot screen of the target system appears, use the boot options prompt
to set the appropriate VNC options and the address of the installation source.
This is described in detail in Section 1.4, Booting the Target System for Installation (page 31).
The target system boots to a text-based environment, giving the network address
and display number under which the graphical installation environment can be
addressed by any VNC viewer application or browser. VNC installations announce
themselves over OpenSLP and can be found using Konqueror in service:/
or slp:/ mode.
4 On the controlling workstation, open a VNC viewing application or Web
browser and connect to the target system as described in Section 1.5.1, VNC
Installation (page 36).
5 Perform the installation as described in Chapter 1, Installation with YaST (Start-Up). Reconnect to the target system after it reboots for the final part of the installation.
6 Finish the installation.
Controlling system with working network connection and VNC viewer software
or Java-enabled browser (Firefox, Konqueror, Internet Explorer, or Opera)
Physical boot medium (CD, DVD, or custom boot disk) for booting the target system
Running DHCP server providing IP addresses
To perform this kind of installation, proceed as follows:
1 Set up the installation source as described in Section 1.2, Setting Up the Server
Holding the Installation Sources (page 12). Choose an NFS, HTTP, or FTP
network server. For an SMB installation source, refer to Section 1.2.5, Managing
an SMB Installation Source (page 20).
2 Boot the target system using the first CD or DVD of the openSUSE media kit.
3 When the boot screen of the target system appears, use the boot options prompt
to set the appropriate VNC options and the address of the installation source.
This is described in detail in Section 1.4, Booting the Target System for Installation (page 31).
The target system boots to a text-based environment, giving the network address
and display number under which the graphical installation environment can be
addressed by any VNC viewer application or browser. VNC installations announce
themselves over OpenSLP and can be found using Konqueror in service:/
or slp:/ mode.
4 On the controlling workstation, open a VNC viewing application or Web
browser and connect to the target system as described in Section 1.5.1, VNC
Installation (page 36).
5 Perform the installation as described in Chapter 1, Installation with YaST (Start-Up). Reconnect to the target system after it reboots for the final part of the installation.
6 Finish the installation.
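To make step 3 concrete, an entry at the boot options prompt for this DHCP-based VNC scenario might look like the following; the server address, path, and password are illustrative placeholders, not values taken from this manual:

```
install=nfs://192.168.1.1/install/openSUSE-10.3 vnc=1 vncpassword=secret
```

The options used here (install, vnc, and vncpassword) are the ones described in Section 1.4.3, Using Custom Boot Options (page 33).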
5 Initiate the boot process of the target system using Wake on LAN. This is described in Section 1.3.7, Wake on LAN (page 31).
6 On the controlling workstation, open a VNC viewing application or Web
browser and connect to the target system as described in Section 1.5.1, VNC
Installation (page 36).
7 Perform the installation as described in Chapter 1, Installation with YaST (Start-Up). Reconnect to the target system after it reboots for the final part of the installation.
8 Finish the installation.
For this type of installation, make sure that the following requirements are met:
Remote installation source: NFS, HTTP, FTP, or SMB with working network
connection
Target system with working network connection
Controlling system with working network connection and working SSH client
software
Physical boot medium (CD or DVD) for booting the target system
Running DHCP server providing IP addresses
To perform this kind of installation, proceed as follows:
1 Set up the installation source as described in Section 1.2, Setting Up the Server
Holding the Installation Sources (page 12). Choose an NFS, HTTP, or FTP
network server. For an SMB installation source, refer to Section 1.2.5, Managing
an SMB Installation Source (page 20).
2 Boot the target system using the first CD or DVD of the openSUSE media kit.
3 When the boot screen of the target system appears, use the boot options prompt
to pass the appropriate parameters for network connection, location of the installation source, and SSH enablement. See Section 1.4.3, Using Custom Boot
Options (page 33) for detailed instructions on the use of these parameters.
The target system boots to a text-based environment, giving you the network
address under which the graphical installation environment can be addressed by
any SSH client.
4 On the controlling workstation, open a terminal window and connect to the target
system as described in Section Connecting to the Installation Program
(page 38).
5 Perform the installation as described in Chapter 1, Installation with YaST (Start-Up). Reconnect to the target system after it reboots for the final part of the installation.
6 Finish the installation.
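As in the VNC scenario, the boot prompt entry for step 3 of this SSH-based scenario might read as follows; the address, path, and password are placeholders chosen only for illustration:

```
install=nfs://192.168.1.1/install/openSUSE-10.3 usessh=1 sshpassword=secret
```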
6 On the controlling workstation, start an SSH client and connect to the target
system as described in Section 1.5.2, SSH Installation (page 38).
7 Perform the installation as described in Chapter 1, Installation with YaST (Start-Up). Reconnect to the target system after it reboots for the final part of the installation.
8 Finish the installation.
4 Select the server type (HTTP, FTP, or NFS). The selected server service is
started automatically every time the system starts. If a service of the selected
type is already running on your system and you want to configure it manually
for the server, deactivate the automatic configuration of the server service with
Do Not Configure Any Network Services. In both cases, define the directory in
which the installation data should be made available on the server.
5 Configure the required server type. This step relates to the automatic configuration
of server services. It is skipped when automatic configuration is deactivated.
Define an alias for the root directory of the FTP or HTTP server on which the
installation data should be found. The installation source will later be located
under ftp://Server-IP/Alias/Name (FTP) or under
http://Server-IP/Alias/Name (HTTP). Name stands for the name of
the installation source, which is defined in the following step. If you selected
NFS in the previous step, define wild cards and export options. The NFS server
will be accessible under nfs://Server-IP/Name. Details of NFS and exports
can be found in Chapter 28, Sharing File Systems with NFS (page 455).
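Assuming a server address of 192.168.1.1, an alias of install, and a source name of openSUSE-10.3 (all placeholders), the installation source URLs resulting from this step would take these forms:

```
ftp://192.168.1.1/install/openSUSE-10.3
http://192.168.1.1/install/openSUSE-10.3
nfs://192.168.1.1/openSUSE-10.3
```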
TIP: Firewall Settings
Make sure that the firewall settings of your server system allow traffic
on the ports for HTTP, NFS, and FTP. If they currently do not, start the
YaST firewall module and open the respective ports.
6 Configure the installation source. Before the installation media are copied to their
destination, define the name of the installation source (ideally, an easily remembered abbreviation of the product and version). YaST allows providing ISO images of the media instead of copies of the installation CDs. If you want this, activate the relevant check box and specify the directory path under which the ISO
files can be found locally. Depending on the product distributed with this installation server, additional add-on CDs or service pack CDs might be required and should be added as extra installation sources. To announce your installation server in the network via OpenSLP, activate the appropriate option.
TIP
Consider announcing your installation source via OpenSLP if your network
setup supports this option. This saves you from entering the network installation path on every target machine. The target systems are just
booted using the SLP boot option and find the network installation source
without any further configuration. For details on this option, refer to
Section 1.4, Booting the Target System for Installation (page 31).
7 Upload the installation data. The most lengthy step in configuring an installation
server is copying the actual installation CDs. Insert the media in the sequence
requested by YaST and wait for the copying procedure to end. When the sources
have been fully copied, return to the overview of existing information sources
and close the configuration by selecting Finish.
Your installation server is now fully configured and ready for service. It is automatically started every time the system is started. No further intervention is required. You only need to configure and start this service correctly by hand if you
have deactivated the automatic configuration of the selected network service
with YaST as an initial step.
To deactivate an installation source, select the installation source to remove then select
Delete. The installation data are removed from the system. To deactivate the network
service, use the respective YaST module.
If your installation server should provide the installation data for more than one product
or product version, start the YaST installation server module and select Add in the
overview of existing installation sources to configure the new installation source.
5 Select Add Host and enter the hostnames of the machines to which to export the
installation data. Instead of specifying hostnames here, you could also use wild
cards, ranges of network addresses, or just the domain name of your network.
Enter the appropriate export options or leave the default, which works fine in
most setups. For more information about the syntax used in exporting NFS shares,
read the exports man page.
6 Click Finish. The NFS server holding the openSUSE installation sources is automatically started and integrated into the boot process.
If you prefer manually exporting the installation sources via NFS instead of using the
YaST NFS Server module, proceed as follows:
1 Log in as root.
2 Open the file /etc/exports and enter the following line:
/productversion *(ro,root_squash,sync)
This exports the directory /productversion to any host that is part of this
network or to any host that can connect to this server. To limit the access to this
server, use netmasks or domain names instead of the general wild card *. Refer
to the export man page for details. Save and exit this configuration file.
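As a sketch of the restriction mentioned above, an export limited to a single subnet instead of the general wild card * could look like this; the subnet address is a placeholder for your own network:

```
/productversion 192.168.1.0/255.255.255.0(ro,root_squash,sync)
```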
3 To add the NFS service to the list of servers started during system boot, execute
the following commands:
insserv /etc/init.d/nfsserver
insserv /etc/init.d/portmap
4 Start the NFS server with rcnfsserver start. If you need to change the
configuration of your NFS server later, modify the configuration file and restart
the NFS daemon with rcnfsserver restart.
Announcing the NFS server via OpenSLP makes its address known to all clients in
your network.
1 Log in as root.
2 Enter the directory /etc/slp.reg.d/.
2c Create a subdirectory holding the installation sources in the FTP root directory:
mkdir instsource
2d Mount the contents of the installation repository into the change root environment of the FTP server:
mount --bind path_to_instsource /srv/ftp/instsource
Replace path_to_instsource with the actual path to the installation source directory on your server. The service: line of the SLP registration file should be entered as one continuous line.
3b Save this configuration file and start the OpenSLP daemon with rcslpd
start.
with
Options Indexes FollowSymLinks
3 Announce the installation source via OpenSLP, if this is supported by your network setup:
3a Create a configuration file called install.suse.http.reg under
/etc/slp.reg.d/ that contains the following lines:
# Register the HTTP Installation Server
service:install.suse:http://$HOSTNAME/srv/www/htdocs/instsource/CD1/,en,65535
description=HTTP Installation Source
Replace path_to_iso with the path to your local copy of the ISO image,
path_to_instsource with the source directory of your server, product
with the product name, and mediumx with the type (CD or DVD) and number
of media you are using.
6 Repeat the previous step to mount all ISO images needed for your product.
7 Start your installation server as usual, as described in Section 1.2.2, Setting Up
an NFS Installation Source Manually (page 14), Section 1.2.3, Setting Up an
FTP Installation Source Manually (page 17), or Section 1.2.4, Setting Up an
HTTP Installation Source Manually (page 19).
#
# "next-server" defines the TFTP server that will be used
next-server ip_tftp_server;
#
# "filename" specifies the pxelinux image on the TFTP server;
# the server runs in a chroot under /srv/tftpboot
filename "pxelinux.0";
}
The host statement introduces the hostname of the installation target. To bind the
hostname and IP address to a specific host, you must know and specify the system's
hardware (MAC) address. Replace all the variables used in this example with the actual
values that match your environment.
After restarting the DHCP server, it provides a static IP to the host specified, enabling
you to connect to the system via SSH.
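Pulling the fragments above together, a complete host statement in /etc/dhcpd.conf might look as follows; the hostname, MAC address, and all IP addresses are placeholders to be replaced with values from your environment:

```
host install-target {
  # bind this entry to the target via its hardware (MAC) address
  hardware ethernet 00:11:22:33:44:55;
  # static IP address handed to the installation target
  fixed-address 192.168.1.20;
  # "next-server" defines the TFTP server that will be used
  next-server 192.168.1.1;
  # "filename" specifies the pxelinux image on the TFTP server
  filename "pxelinux.0";
}
```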
4a If it does not exist, create a file called tftp under this directory with touch
tftp. Then run chmod 755 tftp.
4b Open the file tftp and add the following lines:
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /srv/tftpboot
        disable         = no
}
2 Install the syslinux package directly from your installation CDs or DVDs
with YaST.
3 Copy the /usr/share/syslinux/pxelinux.0 file to the /srv/
tftpboot directory by entering the following:
cp -a /usr/share/syslinux/pxelinux.0 /srv/tftpboot
4 Change to the directory of your installation repository and copy the isolinux
.cfg file to /srv/tftpboot/pxelinux.cfg/default by entering the
following:
cp -a boot/loader/isolinux.cfg /srv/tftpboot/pxelinux.cfg/default
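Steps 3 and 4 can be sketched as a short shell sequence. The sandbox directory below stands in for the real /srv/tftpboot, and a dummy pxelinux.0 is written so the sketch runs without the syslinux package installed; on an actual boot server, use the real paths shown in the comments instead.

```shell
#!/bin/sh
set -e
# Sandbox standing in for /srv/tftpboot
ROOT=$(mktemp -d)
mkdir -p "$ROOT/pxelinux.cfg"
# Real setup: cp -a /usr/share/syslinux/pxelinux.0 /srv/tftpboot
printf 'dummy pxelinux binary' > "$ROOT/pxelinux.0"
# Real setup: cp -a boot/loader/isolinux.cfg /srv/tftpboot/pxelinux.cfg/default
cat > "$ROOT/pxelinux.cfg/default" <<'EOF'
prompt 1
timeout 100
EOF
ls -R "$ROOT"
```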
Adjust the copied configuration file to match your setup; typical entries in this file include display message, prompt 1, and timeout 100.
Labels are mangled as if they were filenames and they must be unique after mangling. For example, the two labels v2.1.30 and v2.1.31 would not be distinguishable under PXELINUX because both mangle to the same DOS filename.
The kernel does not have to be a Linux kernel; it can be a boot sector or a COMBOOT file.
APPEND -
Append nothing. APPEND with a single hyphen as its argument in a LABEL section
can be used to override a global APPEND.
LOCALBOOT type
On PXELINUX, specifying LOCALBOOT 0 instead of a KERNEL option means
invoking this particular label and causes a local disk boot instead of a kernel boot.
Argument   Description
4          Perform a local boot with the Universal Network Driver Interface (UNDI) driver still resident in memory
5          Perform a local boot with the entire PXE stack, including the UNDI driver, still resident in memory
All other values are undefined. If you do not know what the UNDI or PXE stacks
are, specify 0.
TIMEOUT time-out
Indicates how long to wait at the boot prompt until booting automatically, in units
of 1/10 second. The time-out is canceled as soon as the user types anything on the
keyboard, assuming the user will complete the command begun. A time-out of zero
disables the time-out completely (this is also the default). The maximum possible
time-out value is 35996 (just less than one hour).
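For instance, a line such as the following (a value chosen only for illustration) makes the boot loader wait one minute before booting the default entry:

```
TIMEOUT 600
```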
PROMPT flag_val
If flag_val is 0, displays the boot prompt only if Shift or Alt is pressed or Caps
Lock or Scroll Lock is set (this is the default). If flag_val is 1, always displays
the boot prompt.
F1 filename
F2 filename
...etc...
F9 filename
F10 filename
Displays the indicated file on the screen when a function key is pressed at the boot
prompt. This can be used to implement preboot online help (presumably for the
kernel command line options). For backward compatibility with earlier releases,
F10 can be also entered as F0. Note that there is currently no way to bind filenames
to F11 and F12.
Initiate Wake on LAN with ether-wake mac_of_target, replacing mac_of_target with the MAC address of the target system.
Key   Purpose         Available Options
F1    Provide help    None
F2
F3                    Text mode, VESA, resolution #1, resolution #2, ...
F4                    CD-ROM or DVD, SLP, FTP, HTTP, NFS, SMB, Hard Disk
F5                    Default, No ACPI, No local APIC, Installation--ACPI Disabled, Installation--Safe Settings
F6    Driver
Combine them in the order they appear in this table to get one boot option string that is handed
to the installation routines. For example (all in one line):
install=... netdevice=... hostip=... netmask=... vnc=... vncpassword=...
Replace all the values (...) in this string with the values appropriate for your setup.
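A filled-in version of such a string for a VNC installation from an NFS source with static network configuration might read as follows; all addresses and the password are placeholders, and the whole string is entered as one line:

```
install=nfs://192.168.1.1/install/openSUSE-10.3 netdevice=eth0 hostip=192.168.1.20 netmask=255.255.255.0 gateway=192.168.1.254 vnc=1 vncpassword=secret
```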
Table 1.2   Installation Scenarios and the Boot Options They Require

Each scenario performs the actual installation as described in Chapter 1, Installation with YaST (Start-Up).

VNC installation, static network configuration:

install=(nfs,http,ftp,smb)://path_to_instmedia
netdevice=some_netdevice (only needed if several network devices are available)
hostip=some_ip
netmask=some_netmask
gateway=ip_gateway
vnc=1
vncpassword=some_password

VNC installation, dynamic network configuration:

install=(nfs,http,ftp,smb)://path_to_instmedia
vnc=1
vncpassword=some_password

SSH installation, static network configuration:

install=(nfs,http,ftp,smb)://path_to_instmedia
netdevice=some_netdevice (only needed if several network devices are available)
hostip=some_ip
netmask=some_netmask
gateway=ip_gateway
usessh=1
sshpassword=some_password

SSH installation, dynamic network configuration:

install=(nfs,http,ftp,smb)://path_to_instmedia
usessh=1
sshpassword=some_password

For booting the target system via Wake on LAN, see Wake on LAN (page 11).
The installation program announces the IP address and display number needed to connect
for installation. If you have physical access to the target system, this information is
provided right after the system has booted for installation. Enter this data when your
VNC client software prompts for it and provide your VNC password.
Because the installation target announces itself via OpenSLP, you can retrieve its address
information with an SLP browser without any physical contact with the installation
target itself, provided that your network setup and all machines support OpenSLP:
1 Start the KDE file and Web browser Konqueror.
2 Enter service://yast.installation.suse in the location bar. The
target system then appears as an icon in the Konqueror screen. Clicking this icon
launches the KDE VNC viewer in which to perform the installation. Alternatively,
run your VNC viewer software with the IP address provided and add :1 at the
end of the IP address for the display the installation is running on.
3 Enter your VNC password when prompted to do so. The browser window now
displays the YaST screens as in a normal local installation.
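If you use a stand-alone VNC viewer instead of Konqueror, the connection from the steps above can also be opened directly. The address below is hypothetical:

```
vncviewer 192.168.1.10:1
```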
ssh -X root@ip_address_of_target
All existing or suggested partitions on all connected hard disks are displayed in the list
of the YaST Expert Partitioner dialog. Entire hard disks are listed as devices without
numbers, such as /dev/sda. Partitions are listed as parts of these devices, such as
/dev/sda1. The size, type, file system, and mount point of the hard disks and their
partitions are also displayed. The mount point describes where the partition appears in
the Linux file system tree.
If you run the expert dialog during installation, any free hard disk space is also listed
and automatically selected. To provide more disk space to openSUSE, free the needed
space starting from the bottom toward the top of the list (starting from the last partition
of a hard disk toward the first). For example, if you have three partitions, you cannot
use the second exclusively for openSUSE and retain the third and first for other operating
systems.
The partitions, regardless of whether they are Linux or FAT partitions, are specified
with the options noauto and user. This allows any user to mount or unmount these
partitions as needed. For security reasons, YaST does not automatically enter the exec
option here, which is needed for executing programs from the location. However, to
run programs from there, you can enter this option manually. This measure is necessary
if you encounter system messages such as bad interpreter or Permission denied.
Using swap
Swap is used to extend the physically available memory, making it possible to use more
memory than is physically installed as RAM. The memory management system of kernels
before 2.4.10 needed swap as a safety measure: if you did not have twice the size of
your RAM in swap, the performance of the system suffered. These limitations no longer
exist.
When the kernel runs out of memory, it swaps out pages of memory that are used infrequently. The running applications then have more memory available and even
their caching works more smoothly.
If an application tries to allocate as much memory as it can possibly get, problems
with swap can arise. There are three major cases to look at:
System with no swap
The application gets all memory that can be freed by any means. All caches are
freed, and thus all other applications are slowed down. After several minutes, the
out-of-memory killer mechanism of the kernel becomes active and kills the process.
System with medium-sized swap (128 MB to 256 MB)
At first, the system is slowed down like a system without swap. After all physical
RAM has been used up, swap space is used as well. At this point, the system becomes
very slow and it becomes impossible to run commands remotely. Depending
on the speed of the hard disks that hold the swap space, the system stays in this
condition for about 10 to 15 minutes until the out-of-memory killer of the kernel
resolves the issue.
System with lots of swap (several GB)
In this case, it is best not to have an application that runs wild and swaps frantically.
If this happens, the system needs many hours to recover. In the process, other
processes are likely to get timeouts and faults, leaving the system in an undefined
state, even after the faulty process is killed. In this situation, it is usually best to
reboot the machine hard and try to get it running again. Lots of swap is only useful
if you have an application that relies on it. Such applications (like databases or
graphics manipulation programs) often have an option to use hard disk space directly
for their needs. It is advisable to use this option instead of using lots of swap space.
If your system does not run wild, but needs more swap after some time, it is possible
to extend the swap space online. If you prepared a partition for swap space, just add
this partition with YaST. If you do not have a partition available, you can also use
a swap file to extend the swap. Swap files are generally slower than partitions, but
compared to physical RAM, both are extremely slow, so the actual speed difference is
less important than one might think at first.
Procedure 2.1 Adding a Swap File Manually
To add a swap file in the running system, proceed as follows:
1 Create an empty file in your system. For example, if you want to add a swap file
with 128 MB swap at /var/lib/swap/swapfile, use the commands:
mkdir -p /var/lib/swap
dd if=/dev/zero of=/var/lib/swap/swapfile bs=1M count=128
2 Initialize the swap file with the command:
mkswap /var/lib/swap/swapfile
3 Activate the swap file with the command:
swapon /var/lib/swap/swapfile
4 Check the currently available swap space with cat /proc/swaps.
Note that at this point, this is only temporary swap space. After the next reboot,
it is no longer used.
5 To enable this swap file permanently, add the following line to /etc/fstab:
/var/lib/swap/swapfile swap swap defaults 0 0
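Step 5 can be scripted so the entry is not duplicated if the procedure is run twice. This is only a sketch with a made-up helper name, demonstrated on a temporary file rather than the real /etc/fstab:

```shell
#!/bin/sh
# Sketch: append an fstab entry only if it is not already present.
# "add_fstab_entry" is a made-up helper name, not a system tool.
add_fstab_entry() {
    fstab=$1
    entry=$2
    # -x matches the whole line, -F disables regex interpretation
    grep -qxF "$entry" "$fstab" || printf '%s\n' "$entry" >> "$fstab"
}

# Demonstrate on a scratch file rather than the real /etc/fstab:
tmp=$(mktemp)
add_fstab_entry "$tmp" "/var/lib/swap/swapfile swap swap defaults 0 0"
add_fstab_entry "$tmp" "/var/lib/swap/swapfile swap swap defaults 0 0"
grep -c swapfile "$tmp"    # prints 1: the entry was added exactly once
rm -f "$tmp"
```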
Figure 2.2 Physical Partitioning versus LVM
[figure: one disk split into three partitions with mount points (left) versus two disks
whose partitions form volume groups VG 1 and VG 2 containing logical volumes LV 1
to LV 4 with mount points (right)]
Figure 2.2, Physical Partitioning versus LVM (page 50) compares physical partitioning
(left) with LVM segmentation (right). On the left side, one single disk has been divided
into three physical partitions (PART), each with a mount point (MP) assigned so that
the operating system can access them. On the right side, two disks have been divided
into two and three physical partitions each. Two LVM volume groups (VG 1 and VG 2)
have been defined. VG 1 contains two partitions from DISK 1 and one from DISK 2.
VG 2 contains the remaining two partitions from DISK 2. In LVM, the physical disk
partitions that are incorporated in a volume group are called physical volumes (PVs).
Within the volume groups, four logical volumes (LV 1 through LV 4) have been defined,
which can be used by the operating system via the associated mount points. The border
between different logical volumes need not be aligned with any partition border. See
the border between LV 1 and LV 2 in this example.
LVM features:
Several hard disks or partitions can be combined in a large logical volume.
Provided the configuration is suitable, an LV (such as /usr) can be enlarged when
the free space is exhausted.
Using LVM, it is possible to add hard disks or LVs in a running system. However,
this requires hot-swappable hardware that is capable of such actions.
It is possible to activate a "striping mode" that distributes the data stream of a logical
volume over several physical volumes. If these physical volumes reside on different
disks, this can improve the reading and writing performance just like RAID 0.
The snapshot feature enables consistent backups (especially for servers) in the
running system.
With these features, using LVM already makes sense for heavily used home PCs or
small servers. If you have a growing data stock, as in the case of databases, music
archives, or user directories, LVM is just the right thing for you. This would allow file
systems that are larger than the physical hard disk. Another advantage of LVM is that
up to 256 LVs can be added. However, keep in mind that working with LVM is different
from working with conventional partitions. Instructions and further information about
configuring LVM are available in the official LVM HOWTO at
http://tldp.org/HOWTO/LVM-HOWTO/.
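For comparison with the YaST dialogs described below, the same concepts map onto the LVM command line tools roughly as follows. This is only a sketch: the device names are hypothetical, the commands require root privileges, and they destroy existing data on the named partitions.

```
pvcreate /dev/sda2 /dev/sdb1      # turn partitions into physical volumes
vgcreate vg1 /dev/sda2 /dev/sdb1  # combine them into a volume group
lvcreate -n lv1 -L 10G vg1        # create a 10 GB logical volume
mkfs.ext2 /dev/vg1/lv1            # put a file system on the new volume
```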
Starting from kernel version 2.6, LVM version 2 is available. It is downward-compatible
with the previous LVM and enables the continued management of old volume groups.
When creating new volume groups, decide whether to use the new format or the
downward-compatible version. LVM 2 does not require any kernel patches. It makes
use of the device mapper integrated in kernel 2.6, and this kernel only supports LVM
version 2. Therefore, when talking about LVM, this section always refers to LVM version 2.
The YaST Expert Partitioner allows you to edit and delete existing partitions and create
new ones to be used with LVM. There, create an LVM partition by first clicking
Create > Do not format, then selecting 0x8E Linux LVM as the partition identifier. After
creating all the partitions to use with LVM, click LVM to start the LVM configuration.
If there are several volume groups, set the current volume group in the selection box
to the upper left. The buttons in the upper right enable creation of additional volume
groups and deletion of existing volume groups. Only volume groups that do not have
any partitions assigned can be deleted. Partitions that are assigned to a volume group
are also referred to as physical volumes (PVs).
Figure 2.4 Physical Volume Setup
To add a previously unassigned partition to the selected volume group, first click the
partition then Add Volume. At this point, the name of the volume group is entered next
to the selected partition. Assign all partitions reserved for LVM to a volume group.
Otherwise, the space on the partition remains unused. Before exiting the dialog, every
volume group must be assigned at least one physical volume. After assigning all physical volumes, click Next to proceed to the configuration of logical volumes.
All existing logical volumes are listed here. Add, Edit, and Remove logical volumes as
needed until all space in the volume group has been exhausted. Assign at least one
logical volume to each volume group.
Figure 2.5 Logical Volume Management
To create a new logical volume, click Add and fill out the pop-up that opens. As for
partitioning, enter the size, file system, and mount point. Normally, a file system, such
as reiserfs or ext2, is created on a logical volume and is then designated a mount point.
The files stored on this logical volume can be found at this mount point on the installed
system. Additionally, it is possible to distribute the data stream in the logical volume
among several physical volumes (striping). If these physical volumes reside on different
hard disks, this generally results in a better reading and writing performance (like
RAID 0). However, a striping LV with n stripes can only be created correctly if the
hard disk space required by the LV can be distributed evenly to n physical volumes.
If, for example, only two physical volumes are available, a logical volume with three
stripes is impossible.
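The constraint described above can be sketched as a small shell function: an LV with n stripes needs roughly size/n of free space on n distinct physical volumes. This is a simplified model for illustration (real LVM also rounds to physical extents), and the function name is made up:

```shell
#!/bin/sh
# Sketch of the striping constraint: an LV of SIZE MB with STRIPES stripes
# needs SIZE/STRIPES MB of free space on STRIPES distinct physical volumes.
# "can_stripe" is a made-up name; real LVM also rounds to extents.
can_stripe() {
    size=$1
    stripes=$2
    shift 2                  # remaining arguments: free MB on each PV
    if [ "$#" -lt "$stripes" ]; then
        echo no              # fewer PVs than stripes: impossible
        return
    fi
    per_pv=$(( (size + stripes - 1) / stripes ))   # ceiling of size/stripes
    fitting=0
    for free in "$@"; do
        [ "$free" -ge "$per_pv" ] && fitting=$((fitting + 1))
    done
    if [ "$fitting" -ge "$stripes" ]; then echo yes; else echo no; fi
}

can_stripe 600 3 400 400    # prints no: only two PVs for three stripes
can_stripe 600 2 400 400    # prints yes: 300 MB per stripe fits on each PV
```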
WARNING: Striping
At this point, YaST cannot verify the correctness of your entries concerning striping.
Any mistake made here becomes apparent only later, when the LVM is implemented
on disk.
If you have already configured LVM on your system, the existing logical volumes can
be entered now. Before continuing, assign appropriate mount points to these logical
volumes too. With Next, return to the YaST Expert Partitioner and finish your work
there.
RAID combines several hard disk partitions into one large virtual hard disk to optimize
performance, data security, or both. Most RAID controllers use the SCSI protocol, because it can address a larger number of hard disks more effectively than the IDE protocol
and is better suited for parallel processing of commands. There are some RAID controllers
that support IDE or SATA hard disks. Soft RAID provides the advantages of RAID
systems without the additional cost of hardware RAID controllers. However, this requires
some CPU time and has memory requirements that make it unsuitable for true high-performance computers.
be used with soft RAID. There, create RAID partitions by first clicking Create > Do
not format, then selecting 0xFD Linux RAID as the partition identifier. For RAID 0 and
RAID 1, at least two partitions are needed; for RAID 1, usually exactly two and no
more. If RAID 5 is used, at least three partitions are required. It is recommended to
use only partitions of the same size. The RAID partitions should be stored on different
hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to
optimize the performance of RAID 0. After creating all the partitions to use with RAID,
click RAID > Create RAID to start the RAID configuration.
TIP
Starting with openSUSE 10.2, the system detects the settings of pseudo RAID
adapters found on many mainboards. These are used to set up the software
RAID without additional interaction.
In the next dialog, choose between RAID levels 0, 1, and 5 (see Section 2.3.1, RAID
Levels (page 56) for details). After Next is clicked, the following dialog lists all partitions with either the Linux RAID or Linux native type (see Figure 2.7, RAID
Partitions (page 58)). No swap or DOS partitions are shown. If a partition is already
assigned to a RAID volume, the name of the RAID device (for example, /dev/md0)
is shown in the list. Unassigned partitions are indicated with --.
Figure 2.7 RAID Partitions
To add a previously unassigned partition to the selected RAID volume, first click the
partition then Add. At this point, the name of the RAID device is entered next to the
selected partition. Assign all partitions reserved for RAID. Otherwise, the space on the
partition remains unused. After assigning all partitions, click Next to proceed to the
settings dialog where you can fine-tune the performance (see Figure 2.8, File System
Settings (page 59)).
Figure 2.8 File System Settings
As with conventional partitioning, set the file system to use as well as encryption and
the mount point for the RAID volume. Checking Persistent Superblock ensures that
the RAID partitions are recognized as such when booting. After completing the configuration with Finish, see the /dev/md0 device and others indicated with RAID in the
expert partitioner.
2.3.3 Troubleshooting
Check the file /proc/mdstat to find out whether a RAID partition has been damaged.
In the event of a system failure, shut down your Linux system and replace the
defective hard disk with a new one partitioned the same way. Then restart your system
and enter the command mdadm /dev/mdX --add /dev/sdX. Replace 'X' with
your particular device identifiers. This integrates the hard disk automatically into the
RAID system and fully reconstructs it.
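A check like the one above can be partly automated. The sketch below scans md status text for members flagged "(F)", as /proc/mdstat prints for failed devices. It is demonstrated on a sample file so it can be tried without a degraded array, and the helper name is made up:

```shell
#!/bin/sh
# Sketch: list md arrays that have a failed member. /proc/mdstat marks
# failed devices with "(F)". "find_failed" is a made-up helper; the sample
# file stands in for /proc/mdstat so the sketch runs without a RAID.
find_failed() {
    grep '(F)' "$1" | awk '{print $1}'
}

cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sdb1[1] sda1[0](F)
      104320 blocks [2/1] [_U]
md1 : active raid1 sdb2[1] sda2[0]
      5242880 blocks [2/2] [UU]
EOF

find_failed /tmp/mdstat.sample    # prints md0
```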
Online Update
openSUSE offers a continuous stream of software security updates for your product.
By default, openSUSE Updater is used to keep your system up-to-date. Refer to Section Keeping the System Up-to-date (Chapter 3, Installing or Removing Software,
Start-Up) for further information on openSUSE Updater. This chapter covers alternative
graphical tools and command line utilities for updating software packages.
The current patches for openSUSE are available from an update software repository.
If you have registered your product during the installation, an update repository is already
configured. If you have not registered openSUSE, you can do so by running Software
> Online Update Configuration in YaST. Alternatively, you can manually add an update
repository from a source you trust with each update tool. Refer to the respective
application described below for instructions.
openSUSE provides updates with different relevance levels. Security updates fix
severe security hazards and should definitely be installed. Recommended updates fix
issues that could compromise your computer, whereas Optional updates fix non-security-relevant issues or provide enhancements.
TIP
YaST Online Update has been integrated into the YaST software management
module. This ensures that the newest version of a package is always installed.
It is no longer necessary to run an online update after installing new packages.
The patch display lists the patches available for openSUSE, sorted by security relevance. The color of the patch name as well as a pop-up window under
the mouse cursor indicate the security status of the patch: Security (red),
Recommended (blue), or Optional (black). There are three different views on
patches. Use Show Patch Category to toggle the views:
In this case, run the command zypper sd 2 to remove this installation repository
from the list.
To add an installation repository, you can use the command zypper sa
installation_repository. Information about additional installation sources
is provided at http://en.opensuse.org/Installation_Sources.
Using the zypper shell is usually faster because all the relevant data stays in memory.
Zypper supports the readline library, which means that you can use in the zypper shell
all the command line editing functions that are also available in the Bash shell. Zypper
maintains its command history in the file ~/.zypper_history.
When YaST is started in text mode, the YaST Control Center appears first. See Figure 4.1. The main window consists of three areas. The left frame, which is surrounded
by a thick white border, features the categories to which the various modules belong.
The active category is indicated by a colored background. The right frame, which is
confirm with Enter. If you navigate to an item with Tab, press Enter to execute the
selected action or activate the respective menu item.
Function Keys
The F keys (F1 to F12) enable quick access to the various buttons. Which function
keys are actually mapped to which buttons depends on the active YaST module,
because the different modules offer different buttons (Details, Info, Add, Delete,
etc.). Use F10 for OK, Next, and Finish. Press F1 to access the YaST help, which
shows the functions mapped to the individual F keys.
Figure 4.2 The Software Installation Module
View a list of all module names available on your system with yast -l or yast
--list. Start the network module, for example, with yast lan.
To install packages from the command line, use
yast -i <package_name>
or
yast --install <package_name>
package_name can be a single short package name, for example gvim, which is installed with dependency checking, or the full path to an RPM package, which is installed
without dependency checking.
If you need a command-line based software management utility with functionality beyond what YaST provides, consider using zypper. This new utility uses the same software
management library that is also the foundation for the YaST package manager. The
basic usage of zypper is covered in Section 3.2, Update from the Command Line with
zypper (page 66).
If a module does not provide command line support, the module is started in text mode
and the following message appears:
This YaST2 module does not support the command line interface.
You can update an existing system without completely reinstalling it. There are two
types of updates: updating individual software packages and updating the entire system.
5.1.1 Preparations
Before updating, copy the old configuration files to a separate medium, such as a tape
streamer, removable hard disk, USB stick, or ZIP drive, to secure the data. This primarily applies to files stored in /etc as well as some of the directories and files in /var
and /opt. You may also want to write the user data in /home (the HOME directories)
to a backup medium. Back up this data as root. Only root has read permission for
all local files.
Before starting your update, make note of the root partition. The command df / lists
the device name of the root partition. In Example 5.1, List with df -h (page 76),
the root partition to write down is /dev/sda3 (mounted as /).
Example 5.1 List with df -h
[df -h output listing each file system with its size, usage, and mount point; in this
example, the root partition to note is /dev/sda3, mounted as /]
PostgreSQL
Before updating PostgreSQL (postgres), dump the databases. See the manual page
of pg_dump. This is only necessary if you actually used PostgreSQL prior to your
update.
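As a sketch, such a dump could be created with the standard PostgreSQL client tools. The database name and output files are hypothetical; run the commands as the database superuser:

```
pg_dump mydb > mydb.sql          # dump one database
pg_dumpall > all_databases.sql   # or dump all databases in the cluster
```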
Select components from the YaST package selection list according to your needs. If
you select a package essential for the overall operation of the system, YaST issues a
warning. Such packages should be updated only in the update mode. For example, many
packages contain shared libraries. If you update these programs and applications in
the running system, things might malfunction.
PCMCIA
cardmgr no longer manages PC cards. Instead, as with Cardbus cards and other subsystems, a kernel module manages them. All necessary actions are executed by
hotplug. The pcmcia start script has been removed and cardctl is replaced by
pccardctl. For more information, see /usr/share/doc/packages/
pcmciautils/README.SUSE.
/usr/sbin/rcntp
/etc/sysconfig/ntp
Apache 2.2
For Apache version 2.2, Chapter 30, The Apache HTTP Server (page 481) was completely
reworked. In addition, find generic upgrade information at http://httpd.apache
.org/docs/2.2/upgrading.html and the description of new features at
http://httpd.apache.org/docs/2.2/new_features_2_2.html.
ulimit Settings
The ulimit settings can be configured in /etc/sysconfig/ulimit. By default,
only two limits are changed from the kernel defaults:
SOFTVIRTUALLIMIT=80 limits a single process so that it does not allocate more
than 80% of the available virtual memory (RAM and swap).
SOFTRESIDENTLIMIT=85 limits a single process so that it does not occupy
more than 85% of the physical memory (RAM).
These soft limits can be overridden by the user with the ulimit command. Hard limits
can only be overridden by root.
The values have been chosen conservatively to avoid breaking large processes that have
worked before. If there are no legitimate processes with huge memory consumption,
set the limits lower to provide more effective protection against run-away processes.
The limits are per process and thus not an effective protection against malicious users.
The limits are meant to protect against accidental excessive memory usage.
To configure different limits depending on the user, use the pam_limits functionality
and configure /etc/security/limits.conf. The ulimit package is not required
for that, but both mechanisms can be used in parallel. The limits configured in limits
.conf override the global defaults from the ulimit package.
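For illustration, per-user and per-group entries in /etc/security/limits.conf might look like the following sketch. The group students and user alice are hypothetical, and the values are purely illustrative:

```
# limit the address space (in KB) of everyone in group "students"
@students   hard   as      512000
# limit the number of processes for user "alice"
alice       soft   nproc   200
alice       hard   nproc   250
```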
RPM (RPM Package Manager) is used for managing software packages. Its main
commands are rpm and rpmbuild. The powerful RPM database can be queried by
the users, system administrators, and package builders for detailed information about
the installed software.
Essentially, rpm has five modes: installing, uninstalling, or updating software packages;
rebuilding the RPM database; querying RPM databases or individual RPM archives;
integrity checking of packages; and signing packages. rpmbuild can be used to build
installable packages from pristine sources.
Installable RPM archives are packed in a special binary format. These archives consist
of the program files to install and certain meta information used during the installation
by rpm to configure the software package or stored in the RPM database for documentation purposes. RPM archives normally have the extension .rpm.
TIP: Software Development Packages
For a number of packages, the components needed for software development
(libraries, headers, include files, etc.) have been put into separate packages.
These development packages are only needed if you want to compile software
yourself, for example, the most recent GNOME packages. They can be identified
by the name extension -devel, such as the packages alsa-devel,
gimp-devel, and kdelibs3-devel.
If a configuration file was changed by the system administrator before the update,
rpm saves the changed file with the extension .rpmorig or .rpmsave (backup
file) and installs the version from the new package, but only if the originally installed
file and the newer version are different. If this is the case, compare the backup file
(.rpmorig or .rpmsave) with the newly installed file and make your changes
again in the new file. Afterwards, be sure to delete all .rpmorig and .rpmsave
files to avoid problems with future updates.
.rpmnew files appear if the configuration file already exists and if the noreplace
label was specified in the .spec file.
Following an update, .rpmsave and .rpmnew files should be removed after comparing them, so they do not obstruct future updates. The .rpmorig extension is assigned
if the file has not previously been recognized by the RPM database.
Otherwise, .rpmsave is used. In other words, .rpmorig results from updating from
a foreign format to RPM. .rpmsave results from updating from an older RPM to a
newer RPM. .rpmnew does not disclose any information as to whether the system
administrator has made any changes to the configuration file. A list of these files is
available in /var/adm/rpmconfigcheck. Some configuration files (like /etc/
httpd/httpd.conf) are not overwritten to allow continued operation.
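Finding the leftover files mentioned above can be scripted. The sketch below uses a made-up helper and a scratch directory for demonstration; on a real system, you would point it at /etc:

```shell
#!/bin/sh
# Sketch: locate leftover RPM backup copies of configuration files so they
# can be compared and removed. "find_rpm_leftovers" is a made-up helper;
# it is demonstrated on a scratch directory instead of /etc.
find_rpm_leftovers() {
    find "$1" -name '*.rpmnew' -o -name '*.rpmorig' -o -name '*.rpmsave'
}

demo=$(mktemp -d)
touch "$demo/httpd.conf" "$demo/ntp.conf.rpmnew" "$demo/wgetrc.rpmsave"
find_rpm_leftovers "$demo"    # lists the .rpmnew and .rpmsave files
rm -rf "$demo"
```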
The -U switch is not just an equivalent to uninstalling with the -e option and installing
with the -i option. Use -U whenever possible.
To remove a package, enter rpm -e package. rpm only deletes the package if there
are no unresolved dependencies. It is theoretically impossible to delete Tcl/Tk, for example, as long as another application requires it. Even in this case, RPM calls for assistance from the database. If such a deletion is, for whatever reason and under unusual
circumstances, impossible even though no additional dependencies exist, it may be
helpful to rebuild the RPM database using the option --rebuilddb.
result in large amounts of data. However, the SUSE RPM offers a feature enabling the
installation of patches in packages.
The most important considerations are demonstrated using pine as an example:
Is the patch RPM suitable for my system?
To check this, first query the installed version of the package. For pine, this can be
done with
rpm -q pine
pine-4.44-188
Then check if the patch RPM is suitable for this version of pine:
rpm -qp --basedon pine-4.44-224.i586.patch.rpm
pine = 4.44-188
pine = 4.44-195
pine = 4.44-207
This patch is suitable for three different versions of pine. The installed version in
the example is also listed, so the patch can be installed.
Which files are replaced by the patch?
The files affected by a patch can easily be seen in the patch RPM. The rpm parameter -P allows selection of special patch features. Display the list of files with the
following command:
rpm -qpPl pine-4.44-224.i586.patch.rpm
/etc/pine.conf
/etc/pine.conf.fixed
/usr/bin/pine
Which patches are already installed in the system and for which package versions?
A list of all patches installed in the system can be displayed with the command
rpm -qPa. If only one patch is installed in a new system (as in this example), the
list appears as follows:
rpm -qPa
pine-4.44-224
If, at a later date, you want to know which package version was originally installed,
this information is also available in the RPM database. For pine, this information
can be displayed with the following command:
rpm -q --basedon pine
pine = 4.44-188
More information, including information about the patch feature of RPM, is available
in the man pages of rpm and rpmbuild.
Finally, remove the temporary working files old.cpio, new.cpio, and delta.
Using applydeltarpm, you can reconstruct the new RPM from the file system if
the old package is already installed:
To derive it from the old RPM without accessing the file system, use the -r option:
applydeltarpm -r old.rpm new.delta.rpm new.rpm
Table 6.1 The Most Important RPM Query Options

-i              Package information
-l              File list
-f FILE         Query the package that contains the file FILE (the full path must be
                specified with FILE)
-s              File list with status information (implies -l)
-d              List only documentation files (implies -l)
-c              List only configuration files (implies -l)
--dump          File list with complete details (to be used with -l, -c, or -d)
--provides      List features of the package that another package can request with
                --requires
--requires, -R  Capabilities the package requires
--scripts       Installation scripts (preinstall, postinstall, uninstall)
For example, the command rpm -q -i wget displays the information shown in
Example 6.1, rpm -q -i wget (page 91).
Example 6.1 rpm -q -i wget
Name        : wget                  Relocations: (not relocatable)
Version     : 1.9.1                 Vendor: SUSE LINUX AG, Nuernberg, Germany
Release     : 50                    Build Date: Sat 02 Oct 2004 03:49:13 AM CEST
Install date: Mon 11 Oct 2004 10:24:56 AM CEST    Build Host: f53.suse.de
Group       : Productivity/Networking/Web/Utilities
Source RPM  : wget-1.9.1-50.src.rpm
Size        : 1637514               License: GPL
Signature   : DSA/SHA1, Sat 02 Oct 2004 03:59:56 AM CEST, Key ID a84edae89c800aca
Packager    : http://www.suse.de/feedback
URL         : http://wget.sunsite.dk/
Summary     : A tool for mirroring FTP and HTTP servers
Description :
Wget enables you to retrieve WWW documents or FTP files from a server.
This can be done in script files or via the command line.
[...]
The option -f only works if you specify the complete filename with its full path. Provide
as many filenames as desired. For example, the following command
rpm -q -f /bin/rpm /usr/bin/wget
results in:
rpm-4.1.1-191
wget-1.9.1-50
If only part of the filename is known, use a shell script as shown in Example 6.2, Script
to Search for Packages (page 91). Pass the partial filename to the script shown as a
parameter when running it.
Example 6.2 Script to Search for Packages
#! /bin/sh
for i in $(rpm -q -a -l | grep -- "$1"); do
echo "\"$i\" is in package:"
rpm -q -f "$i"
echo ""
done
The command rpm -q --changelog rpm displays a detailed list of change information about a specific package, sorted by date. This example shows information about
the package rpm.
With the help of the installed RPM database, verification checks can be made. Initiate
these with -V, -y, or --verify. With this option, rpm shows all files in a package
that have been changed since installation. rpm uses eight one-character symbols to give
some hints about the following changes:
Table 6.2 RPM Verify Options
5    MD5 check sum
S    File size
L    Symbolic link
T    Modification time
D    Device
U    Owner
G    Group
M    Mode (permissions and file type)
In the case of configuration files, the letter c is printed. For example, for changes to
/etc/wgetrc (wget):
rpm -V wget
S.5....T c /etc/wgetrc
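The status string can be decoded mechanically. The following sketch maps several of the characters to their documented meanings; the helper name is made up, and the input is a sample string rather than live rpm output:

```shell
#!/bin/sh
# Sketch: decode part of the eight-character status string printed by
# rpm -V. Character meanings follow the rpm man page; "decode_rpm_verify"
# is a made-up helper and the input is a sample, not live rpm output.
decode_rpm_verify() {
    flags=$1
    case $flags in *S*) echo "file size differs" ;; esac
    case $flags in *5*) echo "MD5 check sum differs" ;; esac
    case $flags in *L*) echo "symbolic link differs" ;; esac
    case $flags in *T*) echo "modification time differs" ;; esac
    case $flags in *U*) echo "owner differs" ;; esac
    case $flags in *G*) echo "group differs" ;; esac
}

decode_rpm_verify "S.5....T"    # size, MD5 sum, and mtime changed
```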
The files of the RPM database are placed in /var/lib/rpm. If the partition /usr
has a size of 1 GB, this database can occupy nearly 30 MB, especially after a complete
update. If the database is much larger than expected, it is useful to rebuild the database
with the option --rebuilddb. Before doing this, make a backup of the old database.
The cron script cron.daily makes daily copies of the database (packed with gzip)
and stores them in /var/adm/backup/rpmdb. The number of copies is controlled
When you install a source package with YaST, all the necessary components are installed
in /usr/src/packages: the sources and the adjustments in SOURCES and the
relevant .spec file in SPECS.
WARNING
Do not experiment with system components (glibc, rpm, sysvinit, etc.),
because this endangers the operability of your system.
The following example uses the wget.src.rpm package. After installing the package
with YaST, you should have files similar to the following listing:
/usr/src/packages/SOURCES/nops_doc.diff
/usr/src/packages/SOURCES/toplev_destdir.diff
/usr/src/packages/SOURCES/wget-1.9.1+ipvmisc.patch
/usr/src/packages/SOURCES/wget-1.9.1-brokentime.patch
/usr/src/packages/SOURCES/wget-1.9.1-passive_ftp.diff
/usr/src/packages/SOURCES/wget-LFS-20040909.tar.bz2
/usr/src/packages/SOURCES/wget-wrong_charset.patch
/usr/src/packages/SPECS/wget.spec
rpmbuild -b X /usr/src/packages/SPECS/wget.spec starts the compilation. X is a wildcard for the various stages of the build process (see the output of
--help or the RPM documentation for details). The following is merely a brief explanation:
-bp
Prepare sources in /usr/src/packages/BUILD: unpack and patch.
-bc
Do the same as -bp, but with additional compilation.
-bi
Do the same as -bp, but with additional installation of the built software. Caution:
if the package does not support the BuildRoot feature, you might overwrite configuration files.
-bb
Do the same as -bi, but with the additional creation of the binary package. If the
compile was successful, the binary should be in /usr/src/packages/RPMS.
-ba
Do the same as -bb, but with the additional creation of the source RPM. If the
compilation was successful, the binary should be in /usr/src/packages/SRPMS.
--short-circuit
Skip some steps.
The binary RPM created can now be installed with rpm -i or, preferably, with rpm
-U. Installation with rpm makes it appear in the RPM database.
Printer Operation
openSUSE supports printing with many types of printers, including remote network
printers. Printers can be configured with YaST or manually. For configuration instructions, refer to Section Setting Up a Printer (Chapter 2, Setting Up Hardware Components with YaST, Start-Up). Both graphical and command line utilities are available
for starting and managing print jobs. If your printer does not work as expected, refer
to Section 7.8, Troubleshooting (page 108).
CUPS is the standard print system in openSUSE. CUPS is highly user-oriented. In many
cases, it is compatible with LPRng or can be adapted with relatively little effort. LPRng
is included in openSUSE only for reasons of compatibility.
Printers can be distinguished by interface, such as USB or network, and printer language.
When buying a printer, make sure that the printer has an interface (like USB or parallel
port) that is available on your hardware and a suitable printer language. Printers can be
categorized on the basis of the following three classes of printer languages:
PostScript Printers
PostScript is the printer language in which most print jobs in Linux and Unix are
generated and processed by the internal print system. This language is already quite
old and very efficient. If PostScript documents can be processed directly by the
printer and do not need to be converted in additional stages in the print system, the
number of potential error sources is reduced. Because PostScript printers are subject
to substantial license costs, these printers usually cost more than printers without
a PostScript interpreter.
(modify) the standard because they test systems that have not implemented the standard
correctly or because they want to provide certain functions that are not available in the
standard. Manufacturers then provide drivers for only a few operating systems, eliminating difficulties with those systems. Unfortunately, Linux drivers are rarely provided.
The current situation is such that you cannot act on the assumption that every protocol
works smoothly in Linux. Therefore, you may have to experiment with various options
to achieve a functional configuration.
IMPORTANT: Remote Access Settings
By default, cupsd listens only on internal network interfaces (localhost).
When setting up a CUPS network server, adjust the Listen directive in /etc/cups/cupsd.conf so that it also listens on the external network.
CUPS supports the socket, LPD, IPP, and smb protocols.
socket
Socket refers to a connection in which the data is sent to an Internet socket without
first performing a data handshake. Some of the socket port numbers that are commonly used are 9100 or 35. The device URI (uniform resource identifier) syntax
is socket://IP.of.the.printer:port, for example,
socket://192.168.2.202:9100/.
LPD (Line Printer Daemon)
The proven LPD protocol is described in RFC 1179. Under this protocol, some
job-related data, such as the ID of the printer queue, is sent before the actual print
data is sent. Therefore, a printer queue must be specified when configuring the
LPD protocol for the data transmission. The implementations of diverse printer
manufacturers are flexible enough to accept any name as the printer queue. If necessary, the printer manual should indicate what name to use. LPT, LPT1, LP1, or
similar names are often used. An LPD queue can also be configured on a different
Linux or Unix host in the CUPS system. The port number for an LPD service is
515. An example device URI is lpd://192.168.2.202/LPT1.
IPP (Internet Printing Protocol)
IPP is a relatively new (1999) protocol based on the HTTP protocol. With IPP,
more job-related data is transmitted than with the other protocols. CUPS uses IPP
for internal data transmission. This is the preferred protocol for a forwarding queue
between two CUPS servers. The name of the print queue is necessary to configure
IPP correctly. The port number for IPP is 631. Example device URIs are
ipp://192.168.2.202/ps and ipp://192.168.2.202/printers/ps.
SMB (Windows Share)
CUPS also supports printing on printers connected to Windows shares. The protocol
used for this purpose is SMB. SMB uses the port numbers 137, 138, and 139.
Example device URIs are
smb://user:password@workgroup/smb.example.com/printer,
smb://user:[email protected]/printer, and
smb://smb.example.com/printer.
The protocol supported by the printer must be determined before configuration. If the
manufacturer does not provide the needed information, the command nmap, which
comes with the nmap package, can be used to guess the protocol. nmap checks a host
for open ports. For example:
nmap -p 35,137-139,515,631,9100-10000 printerIP
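The open ports reported by nmap map directly onto the protocols described above. A small sketch of that mapping (the guess_protocols helper and the sample port list are illustrative, not part of nmap):

```shell
#!/bin/sh
# Map open TCP ports to the print protocols they usually indicate
guess_protocols() {
  for port in "$@"; do
    case $port in
      515)         echo "port 515: LPD" ;;
      631)         echo "port 631: IPP" ;;
      9100)        echo "port 9100: direct socket printing" ;;
      137|138|139) echo "port $port: SMB" ;;
    esac
  done
}

# Suppose nmap reported these ports open on the printer
guess_protocols 515 631 9100
# prints:
#   port 515: LPD
#   port 631: IPP
#   port 9100: direct socket printing
```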
Then the device (-v) is available as queue (-p), using the specified PPD file (-P).
This means that you must know the PPD file and the name of the device to configure
the printer manually.
Do not use -E as the first option. For all CUPS commands, -E as the first argument
sets use of an encrypted connection. To enable the printer, -E must be used as shown
in the following example:
lpadmin -p ps -v parallel:/dev/lp0 -P \
/usr/share/cups/model/Postscript.ppd.gz -E
Example:
Resolution/Output Resolution: 150dpi *300dpi 600dpi
When a normal user runs lpoptions, the settings are written to ~/.lpoptions.
However, root settings are written to /etc/cups/lpoptions.
use depends on how the application transmits the data. Just try which one results in
starting KPrinter. If set up correctly, the application should open the KPrinter dialog
whenever a print job is issued from it, so you can use the dialog to select a queue and
set other printing options. This requires that the application's own print setup does not
conflict with that of KPrinter and that printing options are only changed through
KPrinter after it has been enabled. More information on KPrinter is available in Section Printing (Chapter 1, Getting Started with the KDE Desktop, KDE User Guide).
CUPS Client
Normally a CUPS client runs on a regular workstation located in a network behind a
firewall. In this case it is recommended to configure the external network devices to
be in the Internal Zone, so the workstation is reachable from within the network.
CUPS Server
If the CUPS server is part of a network protected by a firewall, the external network device
should be configured to be in the Internal Zone of the firewall. If it is part
of the external zone, TCP and UDP port 631 must be opened to make
the CUPS server available on the network.
<Location />
Order Deny,Allow
Deny From All
Allow From 127.0.0.1
Allow From 127.0.0.2
Allow From @LOCAL
</Location>
In this way, only LOCAL hosts can access cupsd on a CUPS server. LOCAL hosts are
hosts whose IP addresses belong to a non-PPP interface (interfaces whose
IFF_POINTOPOINT flags are not set) and whose IP addresses belong to the same
network as the CUPS server. Packets from all other hosts are rejected immediately.
(page 105) are vital preconditions for this feature, because otherwise the security would
not be sufficient for an automatic activation of cupsd.
contains only one PPD file for similar models, for example, if there is no separate PPD
file for the individual models of a model series, but the model name is specified in
a form like Funprinter 1000 series in the PPD file.
The Foomatic PostScript PPD file is not recommended. This may be because the
printer model does not operate efficiently enough in PostScript mode, for example,
the printer may be unreliable in this mode because it has too little memory or the
printer is too slow because its processor is too weak. Furthermore, the printer may
not support PostScript by default, for example, because PostScript support is only
available as an optional module.
If a PPD file from the manufacturer-PPDs package is suitable for a PostScript
printer, but YaST cannot configure it for these reasons, select the respective printer
model manually in YaST.
7.8 Troubleshooting
The following sections cover some of the most frequently encountered printer hardware
and software problems and ways to solve or circumvent these problems. Among the
topics covered are GDI printers, PPD files, and port configuration. Common network
printer problems, defective printouts, and queue handling are also addressed.
driver may always switch the printer back into GDI mode when printing from Windows).
For other GDI printers, extension modules for a standard printer language are
available.
Some manufacturers provide proprietary drivers for their printers. The disadvantage of
proprietary printer drivers is that there is no guarantee that these work with the installed
print system and that they are suitable for the various hardware platforms. In contrast,
printers that support a standard printer language do not depend on a special print system
version or a special hardware platform.
Instead of spending time trying to make a proprietary Linux driver work, it may be
more cost-effective to purchase a supported printer. This would solve the driver problem
once and for all, eliminating the need to install and configure special driver software
and obtain driver updates that may be required due to new developments in the print
system.
If the connection to lpd cannot be established, lpd may not be active or there
may be basic network problems.
As the user root, use the following command to query a (possibly very long)
status report for queue on remote host, provided the respective lpd is active
and the host accepts queries:
echo -e "\004queue" \
| netcat -w 2 -p 722 host 515
If lpd does not respond, it may not be active or there may be basic network problems. If lpd responds, the response should show why printing is not possible on
the queue on host. If you receive a response like that in Example 7.2, Error
Message from lpd (page 111), the problem is caused by the remote lpd.
Example 7.2 Error Message from lpd
lpd: your host does not have line printer access
lpd: queue does not exist
printer: spooling disabled
printer: printing disabled
If a broadcasting CUPS network server exists, the output appears as shown in Example 7.3, Broadcast from the CUPS Network Server (page 112).
The following command can be used to test if a TCP connection can be established
to cupsd (port 631) on host:
netcat -z host 631 && echo ok || echo failed
Port       State   Service
23/tcp     open    telnet
80/tcp     open    http
515/tcp    open    printer
631/tcp    open    cups
9100/tcp   open    jetdirect
This output indicates that the printer connected to the print server box can be addressed via TCP socket on port 9100. By default, nmap only checks a number of
commonly used ports.
to send character strings or files directly to the respective port to test if the printer
can be addressed on this port.
host is different from the job number on the server. Because a print job is usually forwarded immediately, it cannot be deleted with the job number on the client host, because
the client cupsd regards the print job as completed as soon as it has been forwarded
to the server cupsd.
To delete the print job on the server, use a command such as lpstat -h
cups.example.com -o to determine the job number on the server, provided the
server has not already completed the print job (that is, sent it completely to the printer).
Using this job number, the print job on the server can be deleted:
cancel -h cups.example.com queue-jobnumber
4 Reset the printer completely by switching it off for some time. Then insert the
paper and turn on the printer.
The command sax2 creates the /etc/X11/xorg.conf file. This is the primary
configuration file of the X Window System. Find all the settings here concerning your
graphics card, mouse, and monitor.
The following sections describe the structure of the configuration file /etc/X11/
xorg.conf. It consists of several sections, each one dealing with a certain aspect of
the configuration. Each section starts with the keyword Section <designation>
and ends with EndSection. The following convention applies to all sections:
Section "designation"
entry 1
entry 2
entry n
EndSection
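For example, a Files section following this convention could look like the following (a sketch; the font paths shown are illustrative and depend on the installation):

```
Section "Files"
  FontPath   "/usr/share/fonts/misc:unscaled"
  FontPath   "/usr/share/fonts/truetype"
EndSection
```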
The section types available are listed in Table 8.1, Sections in /etc/X11/xorg.conf
(page 118).
Table 8.1 Sections in /etc/X11/xorg.conf
Type
Meaning
Files
The paths used for fonts and the RGB color table.
ServerFlags
General switches for the behavior of the X server.
Module
The modules the X server should load.
InputDevice
Input devices, like keyboards and special input devices (touchpads, joysticks, etc.), are configured in this section. Important
parameters in this section are Driver and the options defining
the Protocol and Device. You normally have one
InputDevice section per device attached to the computer.
Monitor
The monitor used, including its allowed frequency limits. This prevents too high frequencies from being sent
to the monitor by accident.
Modes
The modeline definitions for the screen resolutions.
Device
A specific graphics card, referenced by its descriptive name.
Screen
Combines a Monitor and a Device to form the necessary settings for X.Org.
Monitor, Device, and Screen are explained in more detail. Further information
about the other sections can be found in the manual pages of X.Org and xorg.conf.
There can be several different Monitor and Device sections in xorg.conf. Even
multiple Screen sections are possible. The ServerLayout section determines
which of these sections is used.
Depth determines the color depth to be used with this set of Display settings.
Possible values are 8, 15, 16, 24, and 32, though not all of these might be supported by all X server modules or resolutions.
The Modes section comprises a list of possible screen resolutions. The list is
checked by the X server from left to right. For each resolution, the X server
searches for a suitable Modeline in the Modes section. The Modeline depends
on the capability of both the monitor and the graphics card. The Monitor settings
determine the resulting Modeline.
The first resolution found is the Default mode. With Ctrl + Alt + + (on the
number pad), switch to the next resolution in the list to the right. With Ctrl + Alt
+ - (on the number pad), switch to the previous. This enables you to vary the
resolution while X is running.
The last line of the Display subsection with Depth 16 refers to the size of
the virtual screen. The maximum possible size of a virtual screen depends on the
amount of memory installed on the graphics card and the desired color depth, not
on the maximum resolution of the monitor. If you omit this line, the virtual resolution is just the physical resolution. Because modern graphics cards have a large
amount of video memory, you can create very large virtual desktops. However,
you may no longer be able to use 3D functionality if you fill most of the video
memory with a virtual desktop. If, for example, the card has 16 MB of video
RAM, the virtual screen can take up to 4096x4096 pixels in size at 8-bit color
depth. Especially for accelerated cards, however, it is not recommended to use
all your memory for the virtual screen, because the card's memory is also used
for several font and graphics caches.
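The arithmetic behind the 16 MB example can be checked quickly (a sketch; 8-bit color depth means one byte per pixel):

```shell
#!/bin/sh
# 4096 x 4096 pixels at 8-bit color depth = 1 byte per pixel
bytes=$((4096 * 4096 * 1))
echo "$((bytes / 1024 / 1024)) MB"   # prints: 16 MB
```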
The Identifier line (here Screen[0]) gives this section a defined name
with which it can be uniquely referenced in the following ServerLayout section. The lines Device and Monitor specify the graphics card and the monitor
that belong to this definition. These are just links to the Device and Monitor
sections with their corresponding names or identifiers. These sections are discussed
in detail below.
Section "Device"
  Identifier   "Device[0]"
  VendorName   "Matrox"
  Option       "sw_cursor"
EndSection
The BusID refers to the PCI or AGP slot in which the graphics card is installed.
This matches the ID displayed by the command lspci. The X server needs details
in decimal form, but lspci displays these in hexadecimal form. The value of
BusID is automatically detected by SaX2.
The value of Driver is automatically set by SaX2 and specifies which driver to
use for your graphics card. If the card is a Matrox Millennium, the driver module
is called mga. The X server then searches through the ModulePath defined in
the Files section in the drivers subdirectory. In a standard installation, this
is the /usr/lib/xorg/modules/drivers directory. _drv.o is added to
the name, so, in the case of the mga driver, the driver file mga_drv.o is loaded.
The behavior of the X server or of the driver can also be influenced through additional
options. An example of this is the option sw_cursor, which is set in the device section.
This deactivates the hardware mouse cursor and depicts the mouse cursor using software.
Depending on the driver module, there are various options available, which can be
found in the description files of the driver modules in the directory /usr/share/
doc/package_name. Generally valid options can also be found in the manual pages
(man xorg.conf, man X.Org, and man 4 chips).
If the graphics card has multiple video connectors, it is possible to configure the different
devices of this single card as one single view. Use SaX2 to set up your graphics interface
this way.
for the respective resolution. The monitor properties, especially the allowed frequencies,
are stored in the Monitor section.
WARNING
Unless you have in-depth knowledge of monitor and graphics card functions,
do not change the modelines, because this could severely damage your monitor.
Those who try to develop their own monitor descriptions should be very familiar with
the documentation in /usr/share/X11/doc.
Manual specification of modelines is rarely required today. If you are using a modern
multisync monitor, the allowed frequencies and optimal resolutions can, as a rule, be
read directly from the monitor by the X server via DDC, as described in the SaX2
configuration section. If this is not possible for some reason, use one of the VESA
modes included in the X server. This will work with almost all graphics card and
monitor combinations.
<dir>~/.fonts</dir>
<include ignore_missing="yes">conf.d</include>
To install additional fonts systemwide, manually copy the font files to a suitable directory (as root), such as /usr/share/fonts/truetype. Alternatively, the task
can be performed with the KDE font installer in the KDE Control Center. The result is
the same.
Instead of copying the actual fonts, you can also create symbolic links. For example,
you may want to do this if you have licensed fonts on a mounted Windows partition
and want to use them. Subsequently, run SuSEconfig --module fonts.
This command executes the script /usr/sbin/fonts-config, which handles the font configuration. For more information on this
script, refer to its manual page (man fonts-config).
The procedure is the same for bitmap fonts, TrueType and OpenType fonts, and Type1
(PostScript) fonts. All these font types can be installed into any directory.
X.Org contains two completely different font systems: the old X11 core font system
and the newly designed Xft and fontconfig system. The following sections briefly describe these two systems.
The X11 core font system has a few inherent weaknesses. It is outdated and can no
longer be extended in a meaningful way. Although it must be retained for reasons of
backward compatibility, the more modern Xft and fontconfig system should be used if
at all possible.
For its operation, the X server needs to know which fonts are available and where in
the system it can find them. This is handled by a FontPath variable, which contains
the path to all valid system font directories. In each of these directories, a file named
fonts.dir lists the available fonts in this directory. The FontPath is generated
by the X server at start-up. It searches for a valid fonts.dir file in each of the
FontPath entries in the configuration file /etc/X11/xorg.conf. These entries
are found in the Files section. Display the actual FontPath with xset q. This
path may also be changed at runtime with xset. To add an additional path, use
xset +fp <path>. To remove an unwanted path, use xset -fp <path>.
If the X server is already active, newly installed fonts in mounted directories can be
made available with the command xset fp rehash. This command is executed by
SuSEconfig --module fonts. Because the command xset needs access to the
running X server, this only works if SuSEconfig --module fonts is started from
a shell that has access to the running X server. The easiest way to achieve this is to assume root permissions by entering su and the root password. su transfers the access
permissions of the user who started the X server to the root shell. To check if the
fonts were installed correctly and are available by way of the X11 core font system,
use the command xlsfonts to list all available fonts.
By default, openSUSE uses UTF-8 locales. Therefore, Unicode fonts should be preferred
(font names ending with iso10646-1 in xlsfonts output). All available Unicode
fonts can be listed with xlsfonts | grep iso10646-1. Nearly all Unicode fonts
available in openSUSE contain at least the glyphs needed for European languages
(formerly encoded as iso-8859-*).
8.2.2 Xft
From the outset, the programmers of Xft made sure that scalable fonts including antialiasing are supported well. If Xft is used, the fonts are rendered by the application
using the fonts, not by the X server as in the X11 core font system. In this way, the respective application has access to the actual font files and full control of how the glyphs
are rendered. This constitutes the basis for the correct display of text in a number of
languages. Direct access to the font files is very useful for embedding fonts for printing
to make sure that the printout looks the same as the screen output.
In openSUSE, the two desktop environments KDE and GNOME, Mozilla, and many
other applications already use Xft by default. Xft is already used by more applications
than the old X11 core font system.
Xft uses the fontconfig library for finding fonts and influencing how they are rendered.
The properties of fontconfig are controlled by the global configuration file /etc/
fonts/fonts.conf and the user-specific configuration file ~/.fonts.conf.
Each of these fontconfig configuration files must begin with
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
To add directories to search for fonts, append lines such as the following:
<dir>/usr/local/share/fonts/</dir>
However, this is usually not necessary. By default, the user-specific directory ~/.fonts
is already entered in /etc/fonts/fonts.conf. Accordingly, all you need to do
to install additional fonts is to copy them to ~/.fonts.
You can also insert rules that influence the appearance of the fonts. For example, enter
<match target="font">
<edit name="antialias" mode="assign">
<bool>false</bool>
</edit>
</match>
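Putting the pieces together, a minimal ~/.fonts.conf that adds a font directory and disables antialiasing could look like this (a sketch; the closing </fontconfig> tag ends the file):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>/usr/local/share/fonts/</dir>
  <match target="font">
    <edit name="antialias" mode="assign">
      <bool>false</bool>
    </edit>
  </match>
</fontconfig>
```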
Because nearly all applications use these aliases by default, this affects almost the entire
system. Thus, you can easily use your favorite fonts almost everywhere without having
to modify the font settings in the individual applications.
Use the command fc-list to find out which fonts are installed and available for use.
For instance, the command fc-list returns a list of all fonts. To find out which of
the available scalable fonts (:scalable=true) contain all glyphs required for Hebrew
(:lang=he), their font names (family), their style (style), their weight (weight),
and the name of the files containing the fonts, enter the following command:
fc-list ":lang=he:scalable=true" family style weight
FreeMonoOblique.ttf: FreeMono:style=Oblique:weight=80
FreeMono.ttf: FreeMono:style=Medium:weight=80
FreeSans.ttf: FreeSans:style=Medium:weight=80
FreeSerifBold.ttf: FreeSerif:style=Bold:weight=200
FreeSansBoldOblique.ttf: FreeSans:style=BoldOblique:weight=200
FreeMonoBold.ttf: FreeMono:style=Bold:weight=200
Parameters of fc-list
Parameter
Meaning
family
Name of the font family, for example, FreeSans.
foundry
Manufacturer of the font.
style
Style of the font, such as Medium, Regular, Bold, or Italic.
lang
Languages the font supports.
weight
Weight of the font, such as 80 for regular or 200 for bold.
slant
Slant of the font, usually 0 for none and 100 for italic.
file
Name of the file containing the font.
outline
true for outline fonts, otherwise false.
scalable
true for scalable fonts, otherwise false.
bitmap
true for bitmap fonts, otherwise false.
pixelsize
Font size in pixels. In connection with fc-list, this option only makes sense for bitmap fonts.
Xen Virtualization
This chapter describes and explains the components and technologies you need to understand to set up and manage a Xen-based virtualization environment. It contains the
following sections:
For the latest Novell virtualization documentation, see http://www.novell.com/
documentation/vmserver/.
On the left, the virtual machine host's desktop (domain 0) is shown running a SUSE
Linux operating system. The two virtual machines in the middle are shown running
paravirtualized openSUSE systems. The virtual machine on the right shows a fully
virtual machine running an unmodified operating system, such as Windows Server
2003 or Windows XP.
Hardware Requirements
In most cases, the hardware requirements for the virtual machine host are the same as
those for the openSUSE system, but additional CPU, disk, memory, and network resources should be added to accommodate the resource demands of all planned virtual
machines.
TIP
Remember that virtual machines, just like physical machines, perform better
when they run on faster processors and have access to more system memory.
The following table lists the minimum hardware requirements for running openSUSE
with a Linux virtual machine.
Table 9.1 Hardware Requirements

Computer
Intel EM64T or higher processor.
Memory
512 MB (minimum).
Hard Drive
20 GB (minimum), with 3.7 GB of available, unpartitioned disk space for each Linux virtual machine. Recommended are 7 GB or more of available, unpartitioned disk space for the host and 10 GB or more of available, unpartitioned disk space for virtual machines. Additional disk space might be required depending on which components are selected and how they are used.
CD-ROM Drive
4X CD-ROM drive.
Mouse
USB or PS/2.
Software Requirements
The virtual machine host requires the following software packages and their dependencies to be installed:
kernel-xen
xen
xen-tools
xen-tools-ioemu (required for full-virtualization mode)
kernel-xenpae (used instead of kernel-xen, this package is required to enable a 32-bit virtual machine host to access memory over 3 GB)
yast2-vm
Updates are available through your update channel. Make sure to update to the most
recent packages available.
9.3.3 Prerequisites
Before creating a virtual machine, you need the following:
If you want to use an automated installation file (AutoYaST, in openSUSE), create
it and copy it to a directory on the host machine or make it available on the network.
If you are installing openSUSE, you need a network installation source. For procedures to create the installation sources, see Section 1.2, Setting Up the Server
Holding the Installation Sources (page 12).
Summary
On the summary page you can click on any of the headings to edit the information. As
you edit the information in the Summary, consult the Novell Virtualization Technology:
Guest Operating System Guide [http://www.novell.com/documentation/vmserver/guest_os_sp1/data/bookinfo.html#bookinfo] for instructions specific to the operating system you are installing.
Virtualization Method
The Virtualization Method page allows you to select the type of virtualization you want
to implement.
If your computer supports hardware-assisted virtualization, you can create a virtual
machine that runs in fully virtual mode. If you are installing an operating system that
is modified for virtualization, you can create a virtual machine that runs in paravirtual
mode. For more information about virtualization modes, see Section 9.1.1, Understanding Virtualization Modes (page 133).
Hardware
The Hardware page allows you to specify the amount of memory and number of virtual
processors for your virtual machine.
Initial Memory:
The amount of memory initially allocated to the virtual machine (specified in
megabytes).
Maximum Memory:
The largest amount of memory the VM will ever need.
Virtual Processors:
If desired, you can specify that the virtual machine has more virtual CPUs than the
number of physical CPUs. You can specify up to 32 virtual CPUs; however, for
best performance, the number of virtual processors should be less than or equal to
the number of physical processors.
Graphics
No Graphics Support
The virtual machine operates like a server without a monitor. You can access the
operating system through operating system supported services, such as SSH or
VNC.
Paravirtualized Graphics Adapter
Requires that an appropriate graphics driver is installed in the operating system.
Disks
A virtual machine must have at least one virtual disk. Virtual disks can be:
File backed, which means that the virtual disk is a single image file on a larger
physical disk.
A sparse image file, which means that the virtual disk is a single image file, but
the space is not pre-allocated.
Configured from a block device, such as an entire disk, partition, or volume.
For best performance, create each virtual disk from an entire disk or a partition. For the
next best performance, create an image file but do not create it as a sparse image file.
A virtual disk based on a sparse image file delivers the most disk-space flexibility but
the slowest installation and disk access speeds.
By default, a single, file-backed 4 GB virtual disk is created as a sparse image file in
/var/lib/xen/images/vm_name, where vm_name is the name of the virtual
machine. You can change this configuration to meet your specific requirements.
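A sparse image file of that kind can also be created by hand. A sketch using dd (the filename disk0 is arbitrary; seek moves the end of the file without writing data, so disk space is allocated only on demand):

```shell
#!/bin/sh
# Create a 4 GB sparse image file: no blocks are written, only the size is set
dd if=/dev/zero of=disk0 bs=1 count=0 seek=4G 2>/dev/null
# Apparent size is 4 GB, but almost no disk blocks are actually allocated
ls -ls disk0
```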
By default, the first virtual disk is presented to the virtual machine as the first disk device. Paravirtual disks appear as generic disks (not IDE,
SCSI, etc.). Fully virtualized disks appear to the guest as IDE. In all cases, naming
is left up to the guest OS.
Source
Depending on your specific requirements, you might need to change the location
where the disk image file is created and stored, or specify which disk, partition, or
volume to use.
Sparse Image File
A virtual disk based on a sparse image file does not consume the entire amount of
disk space specified but uses disk space only as needed. This is a good option for
quickly creating virtual disks, but for best performance, you might want to preallocate the disk space by deselecting Create Sparse Image File.
Read-Only Access
A virtual disk can be safely shared among multiple virtual machines only if every
use of the virtual disk is marked as Read Only.
Network Adapters
By default, a single virtual network card is created for the virtual machine. It has a
randomly generated MAC address that you can change to fit your desired configuration.
You can also create additional virtual network cards. In paravirtual machines, virtual
network cards communicate by using a generic network card driver compatible with
Xen.
If you are installing a paravirtual machine's operating system from CD, you should remove the virtual CD reader from the virtual machine after completing the installation
because the virtual machine assumes that the original CD is still in the CD reader, even
if it is ejected. If it is ejected, the virtual machine cannot access the CD (or any other
newly inserted CD) and receives I/O errors.
For instructions on removing the virtual CD reader, see Virtual CD Readers [http://
www.novell.com/documentation/vmserver/config_options/data/
b9rtimf.html#b9rtimf] in Configuration Options and Settings [http://www
.novell.com/documentation/vmserver/config_options/data/
bookinfo.html#bookinfo].
If the installation program is capable of recognizing an installation profile, response
file, or script, you can automate the installation settings by specifying the location of
the profile, response file, or script you want to use. For example, openSUSE uses an
AutoYaST profile.
You can also pass instructions to the kernel at install time by entering parameters for
the Additional Arguments field.
For example, on openSUSE, if you wanted to specify the parameters for an IP address
of 192.35.1.10, a netmask of 255.255.255.0, a gateway of 192.35.1.254
for the virtual server, and use SSH to access installation, you could enter the following
parameters in the Additional Arguments field:
hostip=192.35.1.10 netmask=255.255.255.0 gateway=192.35.1.254 \
usessh=1 sshpassword=<password>
When you have finished entering all the information in the Operating System Installation
page, click Apply to return to the Summary page.
Selecting a virtual machine and clicking Open displays the virtual machine window
showing the virtual machine's current state.
Clicking Run on the virtual machine window boots the virtual machine and displays
the user interface or text console running on the virtual machine.
Selecting a virtual machine and clicking Details lets you view performance and
configure hardware details associated with the virtual machine.
Clicking New in Virtual Machine Manager launches the Create Virtual Machine
Wizard, which walks you through the steps required to set up a virtual machine.
The wizard guides you through the process of defining the virtual machine settings and
installing the operating system.
Figure 9.3 Virtual Machine Summary Screen
After specifying the settings on the Summary screen, the wizard starts the virtual machine
and launches the operating system installation program, which guides you through the
process of installation.
4 Enter xm start vm_name to start the virtual machine with the new settings.
NOTE
It is no longer recommended that you edit the initial creation files stored in
/etc/xen/vm, which are used only during the creation of a new virtual machine.
10 System Monitoring Utilities
A number of programs and mechanisms, some of which are presented here, can be used
to examine the status of your system. Also described are some utilities that are useful
for routine work, along with their most important parameters.
For each of the commands introduced, examples of the relevant outputs are presented.
In these examples, the first line is the command itself (after the > or # sign prompt).
Omissions are indicated with square brackets ([...]) and long lines are wrapped
where necessary. Line breaks for long lines are indicated by a backslash (\).
# command -x -y
output line 1
output line 2
output line 3 is annoyingly long, so long that \
we have to break it
output line 3
[...]
output line 98
output line 99
The descriptions have been kept short to allow as many utilities as possible to be mentioned. Further information for all the commands can be found in the man pages. Most
of the commands also understand the parameter --help, which produces a brief list
of the possible parameters.
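For example, with a GNU tool such as ls:

```shell
# --help prints a one-screen summary; the first line shows the usage syntax.
ls --help | head -n 1
```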
10.1 Debugging
10.1.1 Specifying the Required Library: ldd
Use the command ldd to find out which libraries are loaded by the dynamic executable
specified as argument.
tux@mercury:~> ldd /bin/ls
linux-gate.so.1 => (0xffffe000)
librt.so.1 => /lib/librt.so.1 (0xb7f97000)
libacl.so.1 => /lib/libacl.so.1 (0xb7f91000)
libc.so.6 => /lib/libc.so.6 (0xb7e79000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb7e67000)
/lib/ld-linux.so.2 (0xb7fb6000)
libattr.so.1 => /lib/libattr.so.1 (0xb7e63000)
[...]
function
--------------------
__errno_location
__fprintf_chk
strlen
readdir64
__ctype_get_mb_cur_max
memcpy
textdomain
--------------------
total
[...]
For example, to trace all attempts to open a particular file, use the following:
tux@mercury:~> strace -e open ls .bashrc
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib/librt.so.1", O_RDONLY)       = 3
open("/lib/libacl.so.1", O_RDONLY)      = 3
open("/lib/libc.so.6", O_RDONLY)        = 3
open("/lib/libpthread.so.0", O_RDONLY)  = 3
open("/lib/libattr.so.1", O_RDONLY)     = 3
[...]
To trace all the child processes, use the parameter -f. The behavior and output format
of strace can be largely controlled. For information, see man strace.
The parameter -f list specifies a file with a list of filenames to examine. The -z
allows file to look inside compressed files:
tux@mercury:~> file /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: gzip compressed data, from Unix, max compression
tux@mercury:~> file -z /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: ASCII troff or preprocessor input text \
(gzip compressed data, from Unix, max compression)
Obtain information about total usage of the file systems with the command df. The
parameter -h (or --human-readable) transforms the output into a form understandable for common users.
tux@mercury:~> df -h
Filesystem            Size  [...]
/dev/sda3              11G
udev                  252M
/dev/sda1              16M
/dev/sda4              27G
Display the total size of all the files in a given directory and its subdirectories with the
command du. The parameter -s suppresses the output of detailed information. -h
again transforms the data into a human-readable form:
tux@mercury:~> du -sh /local
1.7M    /local
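du can also report one total per first-level subdirectory with --max-depth=1; a small sketch using scratch directories (the paths are examples, not taken from the text above):

```shell
# Build a tiny directory tree and ask du for per-subdirectory totals.
mkdir -p /tmp/du-demo/a /tmp/du-demo/b
echo data > /tmp/du-demo/a/file
du --max-depth=1 /tmp/du-demo
```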
[...]
The parameter --filesystem produces details of the properties of the file system
in which the specified file is located:
tux@mercury:~> stat /etc/profile --filesystem
  File: "/etc/profile"
    ID: 0        Namelen: 255     Type: reiserfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 2622526    Free: 1809771    Available: 1809771
Inodes: Total: 0          Free: 0
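On newer coreutils releases the same query is spelled -f (long form --file-system); a sketch using the root file system:

```shell
# stat -f reports file system properties (type, block counts) rather than
# the properties of the file itself.
stat -f /
```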
Information about device name resolution is obtained from the file /usr/share/
pci.ids. PCI IDs not listed in this file are marked Unknown device.
The parameter -vv produces all the information that could be queried by the program.
To view the pure numeric values, use the parameter -n.
The option -d outputs a defects list with two tables of bad blocks of a hard disk: first
the one supplied by the vendor (manufacturer table) and second the list of bad blocks
that appeared in operation (grown table). If the number of entries in the grown table
increases, it might be a good idea to replace the hard disk.
10.4 Networking
10.4.1 Show the Network Status: netstat
netstat shows network connections, routing tables (-r), interfaces (-i), masquerade
connections (-M), multicast memberships (-g), and statistics (-s).
tux@mercury:~> netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.2.0     *               255.255.254.0   U         0 0          0 eth0
link-local      *               255.255.0.0     U         0 0          0 eth0
loopback        *               255.0.0.0       U         0 0          0 lo
default         192.168.2.254   0.0.0.0         UG        0 0          0 eth0

tux@mercury:~> netstat -i
Kernel Interface table
Iface   MTU Met    RX-OK RX-ERR RX-DRP RX-OVR [...]
eth0   1500   0  1624507 129056      0      0
lo    16436   0    23728      0      0      0
When displaying network connections or statistics, you can specify the socket type to
display: TCP (-t), UDP (-u), or raw (-w). The -p option shows the PID and name
of the program to which each socket belongs.
The following example lists all TCP connections and the programs using these connections.
mercury:~ # netstat -t -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address  State        PID/Pro
tcp        0      0 mercury:33513   [...]            [...]        [...]
tcp        0    352 mercury:ssh     [...]            [...]        19422/s
tcp        0      0 localhost:ssh   [...]            ESTABLISHED  -
Query the allocation and use of interrupts with the following command:
tux@mercury:~> cat /proc/interrupts
           CPU0
  0:   3577519    XT-PIC  timer
  1:       130    XT-PIC  i8042
  2:         0    XT-PIC  cascade
  5:    564535    XT-PIC  Intel 82801DB-ICH4
  7:         1    XT-PIC  parport0
  8:         2    XT-PIC  rtc
  9:         1    XT-PIC  acpi, uhci_hcd:usb1, ehci_hcd:usb4
 10:         0    XT-PIC  uhci_hcd:usb3
 11:     71772    XT-PIC  uhci_hcd:usb2, eth0
 12:    101150    XT-PIC  i8042
 14:     33146    XT-PIC  ide0
 15:    149202    XT-PIC  ide1
NMI:         0
LOC:         0
ERR:         0
MIS:         0
The address assignment of executables and libraries is contained in the maps file:
tux@mercury:~> cat /proc/self/maps
08048000-0804c000 r-xp 00000000 03:03 17753      /bin/cat
0804c000-0804d000 rw-p 00004000 03:03 17753      /bin/cat
0804d000-0806e000 rw-p 0804d000 00:00 0          [heap]
b7d27000-b7d5a000 r--p 00000000 03:03 11867      /usr/lib/locale/en_GB.utf8/
b7d5a000-b7e32000 r--p 00000000 03:03 11868      /usr/lib/locale/en_GB.utf8/
b7e32000-b7e33000 rw-p b7e32000 00:00 0
b7e33000-b7f45000 r-xp 00000000 03:03 8837       /lib/libc-2.3.6.so
b7f45000-b7f46000 r--p 00112000 03:03 8837       /lib/libc-2.3.6.so
b7f46000-b7f48000 rw-p 00113000 03:03 8837       /lib/libc-2.3.6.so
b7f48000-b7f4c000 rw-p b7f48000 00:00 0
b7f52000-b7f53000 r--p 00000000 03:03 11842      /usr/lib/locale/en_GB.utf8/
[...]
b7f5b000-b7f61000 r--s 00000000 03:03 9109       /usr/lib/gconv/gconv-module
b7f61000-b7f62000 r--p 00000000 03:03 9720       /usr/lib/locale/en_GB.utf8/
b7f62000-b7f76000 r-xp 00000000 03:03 8828       /lib/ld-2.3.6.so
b7f76000-b7f78000 rw-p 00013000 03:03 8828       /lib/ld-2.3.6.so
bfd61000-bfd76000 rw-p bfd61000 00:00 0          [stack]
ffffe000-fffff000 ---p 00000000 00:00 0          [vdso]
10.5.1 procinfo
Important information from the /proc file system is summarized by the command
procinfo:
tux@mercury:~> procinfo
Linux 2.6.18.8-0.5-default (geeko@buildhost) (gcc 4.1.2 20061115) #1 2CPU

Memory:      Total        Used        Free      Shared     Buffers
Mem:       2060604     2011264       49340           0      200664
Swap:      2104472         112     2104360

user  :       2:43:13.78   0.8%        disk 1: 2827023r 968[...]
nice  :   1d 22:21:27.87  14.7%
system:      13:39:57.57   4.3%
IOwait:      18:02:18.59   5.7%
hw irq:       0:03:39.44   0.0%
sw irq:       1:15:35.25   0.4%
idle  :   9d 16:07:56.79  73.8%
uptime:   6d 13:07:11.14

irq  0: 141399308 timer            irq 14:   5074312 ide0
irq  1:     73784 i8042            irq 50:   1938076 uhci_hcd:usb1, ehci_
irq  4:         2                  irq 58:         0 uhci_hcd:usb2
irq  6:         5 floppy [2]       irq 66:    872711 uhci_hcd:usb3, HDA I
irq  7:         2                  irq 74:        15 uhci_hcd:usb4
irq  8:         0 rtc              [...]             PCI-MSI
irq  9:         0 acpi             [...]             PCI-MSI
irq 12:         3
To see all the information, use the parameter -a. The parameter -nN produces updates
of the information every N seconds. In this case, terminate the program by pressing Q.
By default, the cumulative values are displayed. The parameter -d produces the differential values. procinfo -dn5 displays the values that have changed in the last five
seconds.
10.6 Processes
10.6.1 Interprocess Communication: ipcs
The command ipcs produces a list of the IPC resources currently in use:
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 58261504   tux        600        393216     2          dest
0x00000000 58294273   tux        600        196608     2          dest
0x00000000 83886083   tux        666        43264      2
0x00000000 83951622   tux        666        192000     2
0x00000000 83984391   tux        666        282464     2
0x00000000 84738056   root       644        151552     [...]      dest

------ Semaphore Arrays --------
key        semid      owner      perms      nsems

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages
To list all processes with user and command line information, use ps axu:
tux@mercury:~> ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root         1  0.0  0.0    696   272 ?     S    12:59  0:01 init [5]
root         2  0.0  0.0      0     0 ?     SN   12:59  0:00 [ksoftirqd
root         3  0.0  0.0      0     0 ?     S<   12:59  0:00 [events
[...]
tux       4047  0.0  6.0 158548 31400 ?     Ssl  13:02  0:06 mono-best
tux       4057  0.0  0.7   9036  3684 ?     Sl   13:02  0:00 /opt/gnome
tux       4067  0.0  0.1   2204   636 ?     S    13:02  0:00 /opt/gnome
tux       4072  0.0  1.0  15996  5160 ?     Ss   13:02  0:00 gnome-scre
tux       4114  0.0  3.7 130988 19172 ?     SLl  13:06  0:04 sound-juic
tux       4818  0.0  0.3   4192  1812 pts/0 Ss   15:59  0:00 -bash
tux       4959  0.0  0.1   2324   816 pts/0 R+   16:17  0:00 ps axu
To check how many sshd processes are running, use the ps option -p together with
the command pidof, which lists the process IDs of the given processes.
tux@mercury:~> ps -p `pidof sshd`
  PID TTY      STAT   TIME COMMAND
 3524 ?        Ss     0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pid
 4813 ?        Ss     0:00 sshd: tux [priv]
 4817 ?        R      0:00 sshd: tux@pts/0
The process list can be formatted according to your needs. The option -L returns a list
of all keywords. Enter the following command to issue a list of all processes sorted by
memory usage:
tux@mercury:~> ps ax --format pid,rss,cmd --sort rss
  PID  RSS CMD
    2    0 [ksoftirqd/0]
    3    0 [events/0]
    4    0 [khelper]
    5    0 [kthread]
   11    0 [kblockd/0]
   12    0 [kacpid]
  472    0 [pdflush]
  473    0 [pdflush]
[...]
 4028 17556 nautilus --no-default-window --sm-client-id default2
 4118 17800 ksnapshot
 4114 19172 sound-juicer
 4023 25144 gnome-panel --sm-client-id default1
 4047 31400 mono-best --debug /usr/lib/beagle/Best.exe --autostarted
 3973 31520 mono-beagled --debug /usr/lib/beagle/BeagleDaemon.exe --bg --aut
The parameter -p adds the process ID to a given name. To have the command lines
displayed as well, use the -a parameter:
tux@mercury:~> top -n 1
top - 17:06:28 up  2:10,  5 users,  load average: 0.00, 0.00, 0.00
Tasks:  85 total,   1 running,  83 sleeping,   1 stopped,   0 zombie
Cpu(s):  5.5% us,  0.8% sy,  0.8% ni, 91.9% id,  1.0% wa,  0.0% hi,  0.0% si
Mem:    515584k total,   506468k used,     9116k free,    66324k buffers
Swap:   658656k total,        0k used,   658656k free,   353328k cached

  PID USER      PR  NI [...]    TIME+ COMMAND
    1 root      16   0        0:01.33 init
    2 root      34  19        0:00.00 ksoftirqd/0
    3 root      10  -5        0:00.27 events/0
    4 root      10  -5        0:00.01 khelper
    5 root      10  -5        0:00.00 kthread
   11 root      10  -5        0:00.05 kblockd/0
   12 root      20  -5        0:00.00 kacpid
  472 root      20   0        0:00.00 pdflush
  473 root      15   0        0:00.06 pdflush
  475 root      11  -5        0:00.00 aio/0
  474 root      15   0        0:00.07 kswapd0
  681 root      10  -5        0:00.01 kseriod
  839 root      10  -5        0:00.02 reiserfs/0
  923 root      13  -4        0:00.67 udevd
 1343 root      10  -5        0:00.00 khubd
 1587 root      20   0        0:00.00 shpchpd_event
 1746 root      15   0        0:00.00 w1_control
 1752 root      15   0        0:00.00 w1_bus_master1
 2151 root      16   0        0:00.00 acpid
 2165 messageb  16   0        0:00.64 dbus-daemon
 2166 root      15   0        0:00.01 syslog-ng
 2171 root      16   0        0:00.00 klogd
 2235 root      15   0        0:00.10 resmgrd
 2289 root      16   0        0:02.05 hald
 2403 root      23   0        0:00.00 hald-addon-acpi
 2709 root      19   0        0:00.00 NetworkManagerD
 2714 root      16   0        0:00.56 hald-addon-stor
If you press F while top is running, a menu opens with which to make extensive changes
to the format of the output.
The parameter -U UID monitors only the processes associated with a particular user.
Replace UID with the user ID of the user. top -U `id -u` determines the UID from
the current username and displays that user's processes.
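The embedded id call can be tried on its own; with a username argument, id -u prints that user's numeric UID, which is the value top -U expects:

```shell
# Resolve a username to its numeric UID; root is UID 0 on every system.
id -u root   # → 0
```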
             total       used       free     shared    buffers     cached
Mem:        515584     501704      13880          0      73040     334592
-/+ buffers/cache:      94072     421512
Swap:       658656          0     658656
The options -b, -k, -m, and -g show the output in bytes, KB, MB, or GB, respectively.
The parameter -d delay ensures that the display is refreshed every delay seconds. For
example, free -d 1.5 produces an update every 1.5 seconds.
                     USER       PID ACCESS COMMAND
/mnt/notes.txt:      tux      [...]
Following termination of the less process, which was running on another terminal,
the file system can successfully be unmounted.
tux@mercury:~> lsof -p $$
COMMAND  PID USER   FD   TYPE DEVICE    SIZE   NODE NAME
bash    5552 tux   cwd    DIR    3,3    1512 117619 /home/tux
bash    5552 tux   rtd    DIR    3,3     584      2 /
bash    5552 tux   txt    REG    3,3  498816  13047 /bin/bash
bash    5552 tux   mem    REG    0,0       0        [heap] (stat: No such
bash    5552 tux   mem    REG    3,3  217016 115687 /var/run/nscd/passwd
bash    5552 tux   mem    REG    3,3  208464  11867 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3  882134  11868 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3 1386997   8837 /lib/libc-2.3.6.so
bash    5552 tux   mem    REG    3,3   13836   8843 /lib/libdl-2.3.6.so
bash    5552 tux   mem    REG    3,3  290856  12204 /lib/libncurses.so.5.5
bash    5552 tux   mem    REG    3,3   26936  13004 /lib/libhistory.so.5.1
bash    5552 tux   mem    REG    3,3  190200  13006 /lib/libreadline.so.5.
bash    5552 tux   mem    REG    3,3      54  11842 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3    2375  11663 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3     290  11736 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3      52  11831 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3      34  11862 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3      62  11839 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3     127  11664 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3      56  11735 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3      23  11866 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3   21544   9109 /usr/lib/gconv/gconv-m
bash    5552 tux   mem    REG    3,3     366   9720 /usr/lib/locale/en_GB.
bash    5552 tux   mem    REG    3,3   97165   8828 /lib/ld-2.3.6.so
bash    5552 tux    0u    CHR  136,5              7 /dev/pts/5
bash    5552 tux    1u    CHR  136,5              7 /dev/pts/5
bash    5552 tux    2u    CHR  136,5              7 /dev/pts/5
bash    5552 tux  255u    CHR  136,5              7 /dev/pts/5
The special shell variable $$, whose value is the process ID of the shell, has been used.
The command lsof lists all the files currently open when used without any parameters.
Because there are often thousands of open files, listing all of them is rarely useful.
However, the list of all files can be combined with search functions to generate useful
lists. For example, list all used character devices:
tux@mercury:~> lsof | grep CHR
bash    3838 tux    0u  CHR 136,0         2 /dev/pts/0
bash    3838 tux    1u  CHR 136,0         2 /dev/pts/0
bash    3838 tux    2u  CHR 136,0         2 /dev/pts/0
bash    3838 tux  255u  CHR 136,0         2 /dev/pts/0
bash    5552 tux    0u  CHR 136,5         7 /dev/pts/5
bash    5552 tux    1u  CHR 136,5         7 /dev/pts/5
bash    5552 tux    2u  CHR 136,5         7 /dev/pts/5
bash    5552 tux  255u  CHR 136,5         7 /dev/pts/5
X       5646 root mem   CHR   1,1      1006 /dev/mem
lsof    5673 tux    0u  CHR 136,5         7 /dev/pts/5
lsof    5673 tux    2u  CHR 136,5         7 /dev/pts/5
grep    5674 tux    1u  CHR 136,5         7 /dev/pts/5
grep    5674 tux    2u  CHR 136,5         7 /dev/pts/5
UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806687] add@/class/scsi_host/host4
UEVENT[1138806687] add@/class/usb_device/usbdev4.10
UDEV  [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UDEV  [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV  [1138806687] add@/class/scsi_host/host4
UDEV  [1138806687] add@/class/usb_device/usbdev4.10
UEVENT[1138806692] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806692] add@/block/sdb
UEVENT[1138806692] add@/class/scsi_generic/sg1
UEVENT[1138806692] add@/class/scsi_device/4:0:0:0
UDEV  [1138806693] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV  [1138806693] add@/class/scsi_generic/sg1
UDEV  [1138806693] add@/class/scsi_device/4:0:0:0
UDEV  [1138806693] add@/block/sdb
UEVENT[1138806694] add@/block/sdb/sdb1
UDEV  [1138806694] add@/block/sdb/sdb1
UEVENT[1138806694] mount@/block/sdb/sdb1
UEVENT[1138806697] umount@/block/sdb/sdb1
res-base Wins  GCs Fnts Pxms Misc    Pxm mem  Other    Total   PID Identifier
3e00000   385  [...]                  18161K    13K   18175K     ? NOVELL: SU
4600000   391  [...]                   4566K    33K    4600K     ? amaroK - S
1600000    35  [...]                   3811K     4K    3816K     ? KDE Deskto
3400000    52  [...]                   2816K     4K    2820K     ? Linux Shel
2c00000    50  [...]                   2374K     3K    2378K     ? Linux Shel
2e00000    50  [...]                   2341K     3K    2344K     ? Linux Shel
2600000    37  [...]                   1772K     3K    1775K     ? Root - Kon
4800000    37  [...]                   1772K     3K    1775K     ? Root - Kon
2a00000   209  [...]                   1111K    12K    1123K     ? Trekstor25
1800000   182  [...]                   1039K    12K    1052K     ? kicker
1400000   157  [...]                    777K    18K     796K     ? kwin
3c00000   175  [...]                    510K     9K     520K     ? de.comp.la
3a00000   326  [...]                    486K    20K     506K     ? [opensuse-
0a00000    85  [...]                    102K     9K     111K     ? Kopete
4e00000    25  [...]                     63K     3K      66K     ? YaST Contr
2400000    11  [...]                     53K     1K      55K 22061 suseplugge
0e00000    20  [...]                     50K     3K      54K 22016 kded
3200000     6   41    5   72   84       40K     8K      48K     ? EMACS
2200000    54    9    1   30   31       42K     3K      45K     ? SUSEWatche
4400000     2   11    1   30   34       34K     2K      36K 16489 kdesu
1a00000   255    7    0   42   11       19K     6K      26K     ? KMix
3800000     2   14    1   34   37       21K     2K      24K 22242 knotify
1e00000    10    7    0   42    9       15K   624B      15K     ? KPowersave
3600000   106    6    1   30    9        7K     3K      11K 22236 konqueror
2000000    10    5    0   21   34        9K     1K      10K     ? klipper
3000000    21    7    0   11    9        7K   888B       8K     ? KDE Wallet
                                     42219K total
If any users of other systems have logged in remotely, the parameter -f shows the
computers from which they have established the connection.
real    0m4.051s
user    0m0.042s
sys     0m0.205s
11 32-Bit and 64-Bit Applications in a 64-Bit System Environment
openSUSE is available for 64-bit platforms. This does not necessarily mean that all
the applications included have already been ported to 64-bit platforms. openSUSE
supports the use of 32-bit applications in a 64-bit system environment. This chapter
offers a brief overview of how this support is implemented on 64-bit openSUSE platforms. It explains how 32-bit applications are executed (runtime support) and how
32-bit applications should be compiled to enable them to run both in 32-bit and 64-bit
system environments. Additionally, find information about the kernel API and an explanation of how 32-bit applications can run under a 64-bit kernel.
openSUSE for the 64-bit platforms amd64 and Intel 64 is designed so that existing
32-bit applications run in the 64-bit environment out of the box. This support means
that you can continue to use your preferred 32-bit applications without waiting for a
corresponding 64-bit port to become available.
rpmname-devel packages and the development libraries for the second architecture
from rpmname-devel-32bit.
Most open source programs use an autoconf-based program configuration. To use
autoconf for configuring a program for the second architecture, overwrite the normal
compiler and linker settings of autoconf by running the configure script with
additional environment variables.
The following example refers to an x86_64 system with x86 as the second architecture.
1 Use the 32-bit compiler:
CC="gcc -m32"
2 Instruct the linker to process 32-bit objects (always use gcc as the linker frontend):
LD="gcc -m32"
3 Determine that the libraries for libtool and so on come from /usr/lib:
LDFLAGS="-L/usr/lib"
Not all of these variables are needed for every program. Adapt them to the respective
program.
CC="gcc -m32" \
LDFLAGS="-L/usr/lib" \
./configure \
    --prefix=/usr \
    --libdir=/usr/lib
make
make install
12 Booting and Configuring a Linux System
Booting a Linux system involves various different components. The hardware itself is
initialized by the BIOS, which starts the kernel by means of a boot loader. After this
point, the boot process with init and the runlevels is completely controlled by the operating system. The runlevel concept enables you to maintain setups for everyday usage
as well as to perform maintenance tasks on the system.
More information about GRUB, the Linux boot loader, can be found in Chapter 13, The
Boot Loader (page 195).
3. Kernel and initramfs To pass system control, the boot loader loads both the
kernel and an initial RAM-based file system (initramfs) into memory. The contents
of the initramfs can be used by the kernel directly. initramfs contains a small executable called init that handles the mounting of the real root file system. In former
versions of SUSE Linux, these tasks were handled by initrd and linuxrc, respectively. For more information about initramfs, refer to Section 12.1.1, initramfs
(page 180).
4. init on initramfs This program performs all actions needed to mount the
proper root file system, like providing kernel functionality for the needed file
system and device drivers for mass storage controllers with udev. After the root
file system has been found, it is checked for errors and mounted. If this has been
successful, the initramfs is cleaned and the init program on the root file system is
executed. For more information about init, refer to Section 12.1.2, init on
initramfs (page 181). Find more information about udev in Chapter 15, Dynamic
Kernel Device Management with udev (page 227).
5. init init handles the actual booting of the system through several different levels
providing different functionality. init is described in Section 12.2, The init Process
(page 183).
12.1.1 initramfs
initramfs is a small cpio archive that the kernel can load to a RAM disk. It provides a
minimal Linux environment that enables the execution of programs before the actual
root file system is mounted. This minimal Linux environment is loaded into memory
by BIOS routines and does not have specific hardware requirements other than sufficient
memory. initramfs must always provide an executable named init that should execute
the actual init program on the root file system for the boot process to proceed.
Before the root file system can be mounted and the operating system can be started,
the kernel needs the corresponding drivers to access the device on which the root file
system is located. These drivers may include special drivers for certain kinds of hard
drives or even network drivers to access a network file system. The needed modules
for the root file system may be loaded by init on initramfs. After the modules are loaded,
udev provides the initramfs with the needed devices. Later in the boot process, after
changing the root file system, it is necessary to regenerate the devices. This is done by
boot.udev with the command udevtrigger.
If you need to change hardware (e.g. hard disks) in an installed system and this hardware
requires different drivers to be present in the kernel at boot time, you must update
initramfs. This is done in the same way as with its predecessor, initrd, by calling
mkinitrd. Calling mkinitrd without any argument creates an initramfs. Calling
mkinitrd -R creates an initrd. In openSUSE, the modules to load are specified
by the variable INITRD_MODULES in /etc/sysconfig/kernel. After installation, this variable is automatically set to the correct value. The modules are loaded in
exactly the order in which they appear in INITRD_MODULES. This is only important
if you rely on the correct setting of the device files /dev/sd?. However, in current
systems you also may use the device files below /dev/disk/ that are sorted in several subdirectories, named by-id, by-path and by-uuid, and always represent
the same disk. This is also possible at install time by specifying the respective mount
option.
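The variable uses plain shell assignment syntax, so its value can be extracted with standard tools. This sketch parses a sample line instead of reading /etc/sysconfig/kernel, and the module names in it are examples, not taken from the text above:

```shell
# Sample line in the style of /etc/sysconfig/kernel (not read from the system).
cfg='INITRD_MODULES="ata_piix processor thermal fan reiserfs"'

# Strip the variable name and quotes to get the ordered module list,
# which is the order in which the modules would be loaded.
modules=$(printf '%s\n' "$cfg" | sed -n 's/^INITRD_MODULES="\(.*\)"$/\1/p')
echo "$modules"
```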
IMPORTANT: Updating initramfs or initrd
The boot loader loads initramfs or initrd in the same way as the kernel. It is
not necessary to reinstall GRUB after updating initramfs or initrd, because GRUB
searches the directory for the right file when booting.
/dev. Without those special files, the file system and other devices would not be
accessible.
Managing RAID and LVM Setups
If you configured your system to hold the root file system under RAID or LVM,
init sets up LVM or RAID to enable access to the root file system later. Find information about RAID in Section 2.3, Soft RAID Configuration (page 55). Find
information about LVM in Section 2.2, LVM Configuration (page 49).
Managing Network Configuration
If you configured your system to use a network-mounted root file system (mounted
via NFS), init must make sure that the proper network drivers are loaded and that
they are set up to allow access to the root file system.
When init is called during the initial boot as part of the installation process, its tasks
differ from those mentioned earlier:
Finding the Installation Medium
As you start the installation process, your machine loads an installation kernel and
a special initrd with the YaST installer from the installation medium. The YaST
installer, which is run in a RAM file system, needs to have information about the
location of the installation medium to access it and install the operating system.
Initiating Hardware Recognition and Loading Appropriate Kernel Modules
As mentioned in Section 12.1.1, initramfs (page 180), the boot process starts with
a minimum set of drivers that can be used with most hardware configurations. init
starts an initial hardware scanning process that determines the set of drivers suitable
for your hardware configuration. The names of the modules needed for the boot
process are written to INITRD_MODULES in /etc/sysconfig/kernel.
These names are used to generate a custom initramfs that is needed to boot the
system. If the modules are not needed for boot but for coldplug, the modules are
written to /etc/sysconfig/hardware/hwconfig-*. All devices that are
described with configuration files in this directory are initialized in the boot process.
Loading the Installation System or Rescue System
As soon as the hardware has been properly recognized, the appropriate drivers have
been loaded, and udev has created the device special files, init starts the installation
system, which contains the actual YaST installer, or the rescue system.
Starting YaST
Finally, init starts YaST, which starts package installation and system configuration.
12.2.1 Runlevels
In Linux, runlevels define how the system is started and what services are available in
the running system. After booting, the system starts as defined in /etc/inittab in
the line initdefault. Usually this is 3 or 5. See Table 12.1, Available Runlevels
(page 183). As an alternative, the runlevel can be specified at boot time (at the boot
prompt, for instance). Any parameters that are not directly evaluated by the kernel itself
are passed to init. To boot into runlevel 3, just add the number 3 to the boot
prompt.
Table 12.1    Available Runlevels

Runlevel    Description

0           System halt

S or 1      Single user mode

2           Local multiuser mode without remote network (NFS, etc.)

3           Full multiuser mode with network

4           Not used

5           Full multiuser mode with network and X display manager (KDM, GDM, or XDM)

6           System reboot
4. The last things to start are the start scripts of the new runlevel. These are, in this
example, in /etc/init.d/rc5.d and begin with an S. The same procedure
regarding the order in which they are started is applied here.
When changing into the same runlevel as the current runlevel, init only checks /etc/
inittab for changes and starts the appropriate steps, for example, for starting a
getty on another interface. The same functionality may be achieved with the command
telinit q.
Option         Description

start          Start service.

stop           Stop service.

restart        If the service is running, stop it then restart it. Otherwise, start it.

reload         Reload the configuration without stopping and restarting the service.

force-reload   Reload the configuration if the service supports this. Otherwise, do
               the same as restart.

status         Show the current status of service.
blogd buffers all screen data until /var becomes available. Get further information
about blogd on the blogd(8) man page.
The script boot is also responsible for starting all the scripts in /etc/init.d/
boot.d with a name that starts with S. There, the file systems are checked and
loop devices are configured if needed. The system time is also set. If an error occurs
while automatically checking and repairing the file system, the system administrator
can intervene after first entering the root password. Last executed is the script
boot.local.
boot.local
Here, enter additional commands to execute at boot before changing into a runlevel.
It can be compared to AUTOEXEC.BAT on DOS systems.
boot.setup
This script is executed when changing from single user mode to any other runlevel
and is responsible for a number of basic settings, such as the keyboard layout and
initialization of the virtual consoles.
halt
This script is only executed while changing into runlevel 0 or 6. Here, it is executed
either as halt or as reboot. Whether the system shuts down or reboots depends
on how halt is called.
rc
This script calls the appropriate stop scripts of the current runlevel and the start
scripts of the newly selected runlevel.
You can create your own scripts and easily integrate them into the scheme described
above. For instructions about formatting, naming, and organizing custom scripts, refer
to the specifications of the LSB and to the man pages of init, init.d, chkconfig,
and insserv. Additionally consult the man pages of startproc and killproc.
### BEGIN INIT INFO
# Provides:          FOO
# Required-Start:    $syslog $remote_fs
# Required-Stop:     $syslog $remote_fs
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Description:       Start FOO to allow XY and provide YZ
### END INIT INFO
In the first line of the INFO block, after Provides:, specify the name of the program
or service controlled by this init script. In the Required-Start: and
Required-Stop: lines, specify all services that need to be started or stopped before
the service itself is started or stopped. This information is used later to generate the
numbering of script names, as found in the runlevel directories. After
Default-Start: and Default-Stop:, specify the runlevels in which the service
should automatically be started or stopped. Finally, for Description:, provide a
short description of the service in question.
To create the links from the runlevel directories (/etc/init.d/rc?.d/) to the
corresponding scripts in /etc/init.d/, enter the command insserv
new-script-name. The insserv program evaluates the INIT INFO header to create
the necessary links for start and stop scripts in the runlevel directories (/etc/init
.d/rc?.d/). The program also takes care of the correct start and stop order for each
runlevel by including the necessary numbers in the names of these links. If you prefer
a graphical tool to create such links, use the runlevel editor provided by YaST, as described in Section 12.2.3, Configuring System Services (Runlevel) with YaST
(page 190).
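The start and stop ordering encoded in the link names can be illustrated with a small shell sketch. This is a simulation in a temporary directory only; the service name foo and the sequence numbers S12/K10 are made up. On a real system, insserv computes the numbers from the INIT INFO header and creates the links under /etc/init.d/rc?.d/:

```shell
# Simulate the link layout insserv creates in a runlevel directory.
# "foo" is a hypothetical service; 12 and 10 are example sequence numbers.
tmp=$(mktemp -d)
mkdir -p "$tmp/init.d/rc3.d"
touch "$tmp/init.d/foo"                   # the init script itself
ln -s ../foo "$tmp/init.d/rc3.d/S12foo"   # started 12th when entering runlevel 3
ln -s ../foo "$tmp/init.d/rc3.d/K10foo"   # stopped 10th when leaving runlevel 3
ls "$tmp/init.d/rc3.d"
rm -rf "$tmp"
```

Because init processes the links in lexical order, the numeric prefix directly determines when a service starts relative to its dependencies.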
If a script already present in /etc/init.d/ should be integrated into the existing
runlevel scheme, create the links in the runlevel directories right away with insserv or
by enabling the corresponding service in the runlevel editor of YaST. Your changes
are applied during the next reboot: the new service is started automatically.
Do not set these links manually. If something is wrong in the INFO block, problems
will arise when insserv is run later for some other service. The manually-added
service will be removed with the next run of insserv for this script.
With Start, Stop, or Refresh, decide whether a service should be activated. Refresh
status checks the current status. Set or Reset lets you select whether to apply your
changes to the system or to restore the settings that existed before starting the runlevel
editor. Selecting Finish saves the changed settings to disk.
WARNING: Faulty Runlevel Settings May Damage Your System
Faulty runlevel settings may render a system unusable. Before applying your
changes, make absolutely sure that you know their consequences.
The YaST sysconfig dialog is split into three parts. The left part of the dialog shows a
tree view of all configurable variables. When you select a variable, the right part displays
both the current selection and the current setting of this variable. Below, a third window
displays a short description of the variable's purpose, possible values, the default value,
and the actual configuration file from which this variable originates. The dialog also
provides information about which configuration script is executed after changing the
variable and which new service is started as a result of the change. YaST prompts you
to confirm your changes and informs you which scripts will be executed after you leave
the dialog by selecting Finish. You can also select services and scripts to skip
for now, so they are started later. YaST applies all changes automatically and
restarts any services involved for your changes to take effect.
13 The Boot Loader
This chapter focuses on boot management and the configuration of the boot loader
GRUB. The boot procedure as a whole is outlined in Chapter 12, Booting and Configuring a Linux System (page 179). A boot loader represents the interface between machine
(BIOS) and the operating system (openSUSE). The configuration of the boot loader
directly impacts the start of the operating system.
The following terms appear frequently in this chapter and might need some explanation:
Master Boot Record
The structure of the MBR is defined by an operating system-independent convention. The first 446 bytes are reserved for the program code. They typically hold
part of a boot loader program or an operating system selector. The next 64 bytes
provide space for a partition table of up to four entries (see Section 2.1.1, Partition
Types (page 42)). The partition table contains information about the partitioning
of the hard disk and the file system types. The operating system needs this table
for handling the hard disk. With conventional generic code in the MBR, exactly
one partition must be marked active. The last two bytes of the MBR must contain
a static magic number (AA55). An MBR containing a different value is regarded
as invalid by some BIOSs and is not considered for booting.
Boot Sectors
Boot sectors are the first sectors of hard disk partitions with the exception of the
extended partition, which merely serves as a container for other partitions. These
boot sectors have 512 bytes of space for code used to boot an operating system installed in the respective partition. This applies to boot sectors of formatted DOS,
Windows, and OS/2 partitions, which also contain some important basic data of
the file system. In contrast, the boot sectors of Linux partitions are initially empty
after setting up a file system other than XFS. Therefore, a Linux partition is not
bootable by itself, even if it contains a kernel and a valid root file system. A boot
sector with valid code for booting the system has the same magic number as the
MBR in its last two bytes (AA55).
drives, and DVD drives detected by the BIOS). Therefore, changes to the GRUB configuration file (menu.lst) do not require a reinstallation of the boot manager. When
the system is booted, GRUB reloads the menu file with the valid paths and partition
data of the kernel or the initial RAM disk (initrd) and locates these files.
The actual configuration of GRUB is based on three files that are described below:
/boot/grub/menu.lst
This file contains all information about partitions or operating systems that can be
booted with GRUB. Without this information, the GRUB command line prompts
the user for how to proceed (see Section Editing Menu Entries during the Boot
Procedure (page 201) for details).
/boot/grub/device.map
This file translates device names from the GRUB and BIOS notation to Linux device
names.
/etc/grub.conf
This file contains the commands, parameters, and options the GRUB shell needs
for installing the boot loader correctly.
GRUB can be controlled in various ways. Boot entries from an existing configuration
can be selected from the graphical menu (splash screen). The configuration is loaded
from the file menu.lst.
In GRUB, all boot parameters can be changed prior to booting. For example, errors
made when editing the menu file can be corrected in this way. Boot commands can also
be entered interactively at a kind of input prompt (see Section Editing Menu Entries
during the Boot Procedure (page 201)). GRUB offers the possibility of determining
the location of the kernel and the initrd prior to booting. In this way, you can even
boot an installed operating system for which no entry exists in the boot loader configuration.
GRUB actually exists in two versions: as a boot loader and as a normal Linux program
in /usr/sbin/grub. This program is referred to as the GRUB shell. It provides an
emulation of GRUB in the installed system and can be used to install GRUB or test
new settings before applying them. The functionality to install GRUB as the boot
loader on a hard disk or floppy disk is integrated in GRUB in the form of the commands
install and setup. This is available in the GRUB shell when Linux is loaded.
The device names in GRUB are explained in Section Naming Conventions for Hard
Disks and Partitions (page 199). This example specifies the first block of the fourth
partition of the first hard disk.
Use the command kernel to specify a kernel image. The first argument is the path to
the kernel image in a partition. The other arguments are passed to the kernel on its
command line.
If the kernel does not have built-in drivers for access to the root partition or a recent
Linux system with advanced hotplug features is used, initrd must be specified with
a separate GRUB command whose only argument is the path to the initrd file. Because the loading address of the initrd is written into the loaded kernel image, the
command initrd must follow after the kernel command.
The command root simplifies the specification of kernel and initrd files. The only
argument of root is a device or a partition. This device is used for all kernel, initrd,
or other file paths for which no device is explicitly specified until the next root command.
The boot command is implied at the end of every menu entry, so it does not need to
be written into the menu file. However, if you use GRUB interactively for booting, you
must enter the boot command at the end. The command itself has no arguments. It
merely boots the loaded kernel image or the specified chain loader.
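Put together, a manual boot at the GRUB prompt might look like this. This is a sketch only: the device (hd0,4) and the kernel parameters are taken from the examples elsewhere in this chapter and will differ on other systems:

```
grub> root (hd0,4)
grub> kernel /vmlinuz root=/dev/sda7 vga=791
grub> initrd /initrd
grub> boot
```

The root command sets the default device, so the kernel and initrd paths can be given without a device prefix; boot must be entered explicitly because no menu entry implies it here.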
After writing all menu entries, define one of them as the default entry. Otherwise,
the first one (entry 0) is used. You can also specify a time-out in seconds after which
the default entry should boot. timeout and default usually precede the menu entries.
An example file is described in Section An Example Menu File (page 200).
Being dependent on BIOS devices, GRUB does not distinguish between IDE, SATA,
SCSI, and hardware RAID devices. All hard disks recognized by the BIOS or other
controllers are numbered according to the boot sequence preset in the BIOS.
Unfortunately, it is often not possible to map the Linux device names to BIOS device
names exactly. GRUB generates this mapping with the help of an algorithm and saves it to
the file device.map, which can be edited if necessary. Information about the file
device.map is available in Section 13.2.2, The File device.map (page 202).
A complete GRUB path consists of a device name written in parentheses and the path
to the file in the file system in the specified partition. The path begins with a slash. For
example, the bootable kernel could be specified as follows on a system with a single
IDE hard disk containing Linux in its first partition:
(hd0,0)/boot/vmlinuz
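The example menu file discussed below could look roughly like this. This is a sketch assembled from the fragments quoted in this chapter; the device numbers, paths, and kernel parameters are system-specific and will differ on other installations:

```
gfxmenu (hd0,4)/message
color white/blue black/light-gray
default 0
timeout 8

title linux
    kernel (hd0,4)/vmlinuz root=/dev/sda7 vga=791
    initrd (hd0,4)/initrd

title windows
    chainloader (hd0,0)+1

title floppy
    chainloader (fd0)+1

title failsafe
    kernel (hd0,4)/vmlinuz root=/dev/sda7 ide=nodma apm=off acpi=off vga=normal
    initrd (hd0,4)/initrd
```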
default 0
The first menu entry title linux is the one to boot by default.
timeout 8
After eight seconds without any user input, GRUB automatically boots the default
entry. To deactivate automatic boot, delete the timeout line. If you set timeout
0, GRUB boots the default entry immediately.
The second and largest block lists the various bootable operating systems. The sections
for the individual operating systems are introduced by title.
The first entry (title linux) is responsible for booting openSUSE. The kernel
(vmlinuz) is located in the first logical partition (the boot partition) of the first
hard disk. Kernel parameters, such as the root partition and VGA mode, are appended here. The root partition is specified according to the Linux naming convention
(/dev/sda7), because this information is read by the kernel and has nothing to
do with GRUB. The initrd is also located in the first logical partition of the first
hard disk.
The second entry is responsible for loading Windows. Windows is booted from the
first partition of the first hard disk (hd0,0). The command chainloader +1
causes GRUB to read and execute the first sector of the specified partition.
The next entry enables booting from floppy disk without modifying the BIOS settings.
The boot option failsafe starts Linux with a selection of kernel parameters that
enables Linux to boot even on problematic systems.
The menu file can be changed whenever necessary. GRUB then uses the modified settings during the next boot. Edit the file permanently using YaST or an editor of your
choice. Alternatively, make temporary changes interactively using the edit function of
GRUB. See Section Editing Menu Entries during the Boot Procedure (page 201).
Select the entry to edit in the GRUB text-based menu, then press E. Changes made in this way only apply to the
current boot and are not adopted permanently.
IMPORTANT: Keyboard Layout during the Boot Procedure
The US keyboard layout is the only one available when booting. See Figure US
Keyboard Layout (Start-Up).
Editing menu entries facilitates the repair of a defective system that can no longer be
booted, because the faulty configuration file of the boot loader can be circumvented by
manually entering parameters. Manually entering parameters during the boot procedure
is also useful for testing new settings without impairing the native system.
After activating the editing mode, use the arrow keys to select the menu entry of the
configuration to edit. To make the configuration editable, press E again. In this way,
edit incorrect partitions or path specifications before they have a negative effect on the
boot process. Press Enter to exit the editing mode and return to the menu. Then press
B to boot this entry. Further possible actions are displayed in the help text at the bottom.
To enter changed boot options permanently and pass them to the kernel, open the file
menu.lst as the user root and append the respective kernel parameters to the existing
line, separated by spaces:
title linux
kernel (hd0,0)/vmlinuz root=/dev/sda3 additional parameter
initrd (hd0,0)/initrd
GRUB automatically adopts the new parameters the next time the system is booted.
Alternatively, this change can also be made with the YaST boot loader module. Append
the new parameters to the existing line, separated by spaces.
(fd0)   /dev/fd0
(hd0)   /dev/sda
(hd1)   /dev/sdb
Because the order of IDE, SCSI, and other hard disks depends on various factors and
Linux is not able to identify the mapping, the sequence in the file device.map can
be set manually. If you encounter problems when booting, check if the sequence in this
file corresponds to the sequence in the BIOS and use the GRUB prompt to modify it
temporarily if necessary. After the Linux system has booted, the file device.map
can be edited permanently with the YaST boot loader module or an editor of your
choice.
After manually changing device.map, execute the following command to reinstall
GRUB. This command causes the file device.map to be reloaded and the commands
listed in grub.conf to be executed:
grub --batch < /etc/grub.conf
2 Paste the encrypted string into the global section of the file menu.lst:
gfxmenu (hd0,4)/message
color white/blue black/light-gray
default 0
timeout 8
password --md5 $1$lS2dv/$JOYcdxIn7CJk9xShzzJVw/
Now GRUB commands can only be executed at the boot prompt after pressing
P and entering the password. However, users can still boot all operating systems
from the boot menu.
3 To prevent one or several operating systems from being booted from the boot
menu, add the entry lock to every section in menu.lst that should not be
bootable without entering a password. For example:
title linux
kernel (hd0,4)/vmlinuz root=/dev/sda7 vga=791
initrd (hd0,4)/initrd
lock
After rebooting the system and selecting the Linux entry from the boot menu,
the following error message is displayed:
Error 32: Must be authenticated
Press Enter to enter the menu. Then press P to get a password prompt. After entering the password and pressing Enter, the selected operating system (Linux in
this case) should boot.
Use the Section Management tab to edit, change, and delete boot loader sections for
the individual operating systems. To add an option, click Add. To change the value of
an existing option, select it with the mouse and click Edit. To remove an existing entry,
select it and click Delete. If you are not familiar with boot loader options, read Section 13.2, Booting with GRUB (page 196) first.
Use the Boot Loader Installation tab to view and change settings related to type, location,
and advanced loader settings.
Access advanced configuration options from the drop-down menu that opens after you
click on Other. The built-in editor lets you change the GRUB configuration files (see
Section 13.2, Booting with GRUB (page 196) for details). You can also delete the
existing configuration and Start from Scratch or let YaST Propose a New Configuration.
It is also possible to write the configuration to disk or reread the configuration from the
disk. To restore the original Master Boot Record that was saved during the installation,
choose Restore MBR of Hard Disk.
cp /boot/vmlinuz iso/boot/
cp /boot/initrd iso/boot/
cp /boot/message iso/boot/
cp /usr/lib/grub/stage2_eltorito iso/boot/grub
cp /boot/grub/menu.lst iso/boot/grub
6 Write the resulting file grub.iso to a CD using your preferred utility. Do not
burn the ISO image as a data file; use the option for burning a CD image in
your burning utility.
option is automatically activated in accordance with the selected resolution and the
graphics card. There are three ways to disable the SUSE screen, if desired:
Disabling the SUSE Screen When Necessary
Enter the command echo 0 >/proc/splash on the command line to disable
the graphical screen. To activate it again, enter echo 1 >/proc/splash.
Disabling the SUSE Screen by Default
Add the kernel parameter splash=0 to your boot loader configuration. Chapter 13,
The Boot Loader (page 195) provides more information about this. However, if you
prefer the text mode, which was the default in earlier versions, set vga=normal.
Completely Disabling the SUSE Screen
Compile a new kernel and disable the option Use splash screen instead of boot logo
in framebuffer support.
TIP
Disabling framebuffer support in the kernel automatically disables the
splash screen as well. SUSE cannot provide any support for your system if
you run it with a custom kernel.
13.7 Troubleshooting
This section lists some of the problems frequently encountered when booting with
GRUB and a short description of possible solutions. Some of the problems are covered
in articles in the Support Database at http://en.opensuse.org/SDB:SDB. Use
the search dialog to search for keywords like GRUB, boot, and boot loader.
GRUB and XFS
XFS leaves no room for stage1 in the partition boot block. Therefore, do not
specify an XFS partition as the location of the boot loader. This problem can be
solved by creating a separate boot partition that is not formatted with XFS.
GRUB Reports GRUB Geom Error
GRUB checks the geometry of connected hard disks when the system is booted.
Sometimes, the BIOS returns inconsistent information and GRUB reports a GRUB
Geom Error. If this is the case, use LILO or update the BIOS. Detailed information
In this example, Windows is started from the second hard disk. For this purpose,
the logical order of the hard disks is changed with map. This change does not affect
the logic within the GRUB menu file. Therefore, the second hard disk must be
specified for chainloader.
14 Special System Features
This chapter starts with information about various software packages, the virtual consoles, and the keyboard layout. We talk about software components like bash, cron,
and logrotate, because they were changed or enhanced during the last release cycles.
Even if they are small or considered of minor importance, users may want to change
their default behavior, because these components are often closely coupled with the
system. The chapter concludes with a section about language- and country-specific
settings (I18N and L10N).
1. /etc/profile
2. ~/.profile
3. /etc/bash.bashrc
4. ~/.bashrc
Make custom settings in ~/.profile or ~/.bashrc. To ensure the correct processing of these files, it is necessary to copy the basic settings from /etc/skel/
.profile or /etc/skel/.bashrc into the home directory of the user. It is recommended to copy the settings from /etc/skel after an update. Execute the following
shell commands to prevent the loss of personal adjustments:
mv ~/.bashrc ~/.bashrc.old
cp /etc/skel/.bashrc ~/.bashrc
mv ~/.profile ~/.profile.old
cp /etc/skel/.profile ~/.profile
You cannot edit /etc/crontab by calling the command crontab -e. This file
must be loaded directly into an editor, modified, then saved.
A number of packages install shell scripts to the directories /etc/cron.hourly,
/etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly, whose
execution is controlled by /usr/lib/cron/run-crons.
59 *  *  *  *   root  rm -f /var/spool/cron/lastrun/cron.hourly
14 2  *  *  *   root  rm -f /var/spool/cron/lastrun/cron.daily
29 2  *  *  6   root  rm -f /var/spool/cron/lastrun/cron.weekly
44 2  1  *  *   root  rm -f /var/spool/cron/lastrun/cron.monthly
For example, such files ship with the packages apache2 (/etc/logrotate.d/
apache2) and syslogd (/etc/logrotate.d/syslog).
Example 14.3 Example for /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own lastlog or wtmp - we'll rotate them here
#/var/log/wtmp {
#    monthly
#    create 0664 root utmp
#    rotate 1
#}
# system-specific logs may also be configured here.
-m   The maximum resident set size
-v   The maximum amount of virtual memory available to the shell
-s   The maximum size of the stack
-c   The maximum size of core files created
-a   All current limits are reported

Memory amounts must be specified in KB. For more detailed information, see man
bash.
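A sketch of how these options are used in practice; the limit values here are arbitrary examples:

```shell
# ulimit is a bash builtin; the limits apply to the current shell
# and to all processes it starts afterwards.
ulimit -c 0            # -c: maximum core file size (here: core files disabled)
ulimit -S -v 2097152   # -v: soft limit on virtual memory, given in KB (2 GB)
ulimit -c              # prints the current core file limit: 0
ulimit -a              # -a: report all current limits
```

Hard limits (set with -H) can only be raised again by root, so a soft limit is usually the safer choice for experiments.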
emacs-el: the uncompiled library files in Emacs Lisp. These are not required at
runtime.
Numerous add-on packages can be installed if needed: emacs-auctex (for LaTeX), psgml (for SGML and XML), gnuserv (for client and server operation),
and others.
These changes only affect applications that use terminfo entries or whose configuration files are changed directly (vi, less, etc.). Applications not shipped with the
system should be adapted to these defaults.
Under X, the compose key (multikey) can be accessed using Ctrl + Shift (right). Also
see the corresponding entry in /etc/X11/Xmodmap.
Further settings are possible using the X Keyboard Extension (XKB). This extension
is also used by the desktop environments GNOME (gswitchit) and KDE (kxkb).
TIP: For More Information
Information about XKB is available in /etc/X11/xkb/README and the documents listed there.
Detailed information about the input of Chinese, Japanese, and Korean (CJK)
is available at Mike Fabian's page: http://www.suse.de/~mfabian/
suse-cjk/input.html.
RC_LANG
If none of the previous variables are set, this is the fallback. By default, only
RC_LANG is set. This makes it easier for users to enter their own values.
ROOT_USES_LANG
A yes or no variable. If it is set to no, root always works in the POSIX environment.
The variables can be set with the YaST sysconfig editor (see Section 12.3.1, Changing
the System Configuration Using the YaST sysconfig Editor (page 192)). The value of
such a variable contains the language code, country code, encoding, and modifier. The
individual components are connected by special characters:
LANG=<language>[[_<COUNTRY>].<Encoding>[@<Modifier>]]
LANG=en_US.UTF-8
This is the default setting if American English is selected during installation. If you
selected another language, that language is enabled but still with UTF-8 as the
character encoding.
LANG=en_US.ISO-8859-1
This sets the language to English, country to United States, and the character set
to ISO-8859-1. This character set does not support the Euro sign, but it can be
useful sometimes for programs that have not been updated to support UTF-8. The
string defining the charset (ISO-8859-1 in this case) is then evaluated by programs like Emacs.
LANG=en_IE@euro
The above example explicitly includes the Euro sign in a language setting. Strictly
speaking, this setting is obsolete now, because UTF-8 also covers the Euro symbol.
It is only useful if an application does not support UTF-8, but ISO-8859-15.
SuSEconfig reads the variables in /etc/sysconfig/language and writes the
necessary changes to /etc/SuSEconfig/profile and /etc/SuSEconfig/
csh.cshrc. /etc/SuSEconfig/profile is read or sourced by /etc/
profile. /etc/SuSEconfig/csh.cshrc is sourced by /etc/csh.cshrc.
This makes the settings available systemwide.
Users can override the system defaults by editing their ~/.bashrc accordingly. For
instance, if you do not want to use the systemwide en_US for program messages, include
LC_MESSAGES=es_ES so messages are displayed in Spanish instead.
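A quick way to check the effect without editing any file — a sketch that assumes the es_ES locale data is installed for the messages to actually appear in Spanish:

```shell
# Override one locale category for the current shell and inspect the result.
export LANG=en_US.UTF-8
export LC_MESSAGES=es_ES.UTF-8   # Spanish messages, everything else stays en_US
locale | grep LC_MESSAGES        # shows the effective override
```

Because LC_ALL would override all categories at once, setting only LC_MESSAGES keeps dates, numbers, and sorting in the original locale.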
A fallback chain can also be defined, for example, for Breton to French or for Galician
to Spanish to Portuguese:
LANGUAGE="br_FR:fr_FR"
LANGUAGE="gl_ES:es_ES:pt_PT"
If desired, use the Norwegian variants Nynorsk and Bokmål instead (with additional
fallback to no):
LANG="nn_NO"
LANGUAGE="nn_NO:nb_NO:no"
or
LANG="nb_NO"
LANGUAGE="nb_NO:nn_NO:no"
Note that in Norwegian, LC_TIME is also treated differently.
One problem that can arise is a separator used to delimit groups of digits not being
recognized properly. This occurs if LANG is set to only a two-letter language code like
de, but the definition file glibc uses is located in /usr/share/lib/de_DE/
LC_NUMERIC. Thus LC_NUMERIC must be set to de_DE to make the separator definition
visible to the system.
15 Dynamic Kernel Device Management with udev
The kernel can add or remove almost any device in the running system. Changes in
device state (whether a device is plugged in or removed) need to be propagated to
userspace. Devices need to be configured as soon as they are plugged in and discovered.
Users of a certain device need to be informed about any state changes of this device.
udev provides the needed infrastructure to dynamically maintain the device node files
and symbolic links in the /dev directory. udev rules provide a way to plug external
tools into the kernel device event processing. This enables you to customize udev device
handling, for example, by adding certain scripts to execute as part of kernel device
handling, or request and import additional data to evaluate during device handling.
Every device driver carries a list of known aliases for devices it can handle. The list is
contained in the kernel module file itself. The program depmod reads the ID lists and
creates the file modules.alias in the kernel's /lib/modules directory for all
currently available modules. With this infrastructure, module loading is as easy as
calling modprobe for every event that carries a MODALIAS key. If modprobe
$MODALIAS is called, it matches the device alias composed for the device with the
aliases provided by the modules. If a matching entry is found, that module is loaded.
All this is triggered by udev and happens automatically.
The UEVENT lines show the events the kernel has sent over netlink. The UDEV lines
show the finished udev event handlers. The timing is printed in microseconds. The time
between UEVENT and UDEV is the time udev took to process this event, or the time the
udev daemon delayed execution to synchronize this event with related, already
running events. For example, events for hard disk partitions always wait for the main
disk device event to finish, because the partition events may rely on the data the main
disk event has queried from the hardware.
udevmonitor --env shows the complete event environment:
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10
SUBSYSTEM=input
SEQNUM=1181
NAME="Logitech USB-PS/2 Optical Mouse"
PHYS="usb-0000:00:1d.2-1/input0"
UNIQ=""
EV=7
KEY=70000 0 0 0 0
REL=103
MODALIAS=input:b0003v046DpC03Ee0110-e0,1,2,k110,111,112,r0,1,8,amlsfw
udev also sends messages to syslog. The default syslog priority that controls which
messages are sent to syslog is specified in the udev configuration file /etc/udev/
udev.conf. The log priority of the running daemon can be changed with
udevcontrol log_priority=level/number.
The console rule consists of three keys: one match key (KERNEL) and two assign
keys (MODE and OPTIONS). The KERNEL match rule searches the device list for any items
of the type console. Only exact matches are valid and trigger this rule to be executed.
The MODE key assigns special permissions to the device node, in this case, read and
write permissions for the owner of this device and no permissions for anyone else.
The OPTIONS key makes
this rule the last rule to be applied to any device of this type. Any later rule matching
this particular device type does not have any effect.
The serial devices rule consists of two match keys (KERNEL and ATTRS) and
one assign key (SYMLINK). The KERNEL key searches for all devices of the ttyUSB
type. Using the * wild card, this key matches several of these devices. The second
match key, ATTRS, checks whether the product attribute file in sysfs for any
ttyUSB device contains a certain string. The assign key (SYMLINK) triggers the addition of a symbolic link to this device under /dev/pilot. The operator used in this
key (+=) tells udev to additionally perform this action, even if previous or later rules
add other symbolic links. As this rule contains two match keys, it is only applied if both
conditions are met.
The printer rule deals with USB printers and contains two match keys which must
both apply to get the entire rule applied (SUBSYSTEM and KERNEL). Three assign
keys deal with the naming for this device type (NAME), the creation of symbolic device
links (SYMLINK), and the group membership for this device type (GROUP). Using the
* wild card in the KERNEL key makes it match several lp printer devices. Substitutions
are used in both the NAME and the SYMLINK keys to extend these strings by the internal
device name. For example, the symlink to the first lp USB printer would read /dev/
usblp0.
The kernel firmware loader rule makes udev load additional firmware by an
external helper script during runtime. The SUBSYSTEM match key searches for the
firmware subsystem. The ACTION key checks whether any device belonging to the
firmware subsystem has been added. The RUN+= key triggers the execution of the
firmware.sh script to locate the firmware that is to be loaded.
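For reference, rules of the kind discussed above might look like this in a file under /etc/udev/rules.d/. This is a sketch: the rules actually shipped differ in detail, and the product string for the handheld is a made-up example:

```
KERNEL=="console", MODE="0600", OPTIONS="last_rule"
KERNEL=="ttyUSB*", ATTRS{product}=="*Visor*", SYMLINK+="pilot"
SUBSYSTEM=="usb", KERNEL=="lp*", NAME="usb%k", SYMLINK+="usb%k", GROUP="lp"
SUBSYSTEM=="firmware", ACTION=="add", RUN+="firmware.sh"
```

Note how the == operator always marks a match condition, while = and += perform assignments; with %k standing for the kernel name, the third rule yields /dev/usblp0 for the first printer.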
Some general characteristics are common to all rules:
Each rule consists of one or more key-value pairs separated by commas.
A key's operation is determined by the operator. udev rules support several different
operators.
Each given value must be enclosed by quotation marks.
Each line of the rules file represents one rule. If a rule is longer than just one line,
use \ to join the different lines just as you would do in shell syntax.
udev rules support a shell-style pattern matching for the *, ?, and [] patterns.
udev rules support substitutions.
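These characteristics combined, a rule spanning two lines could look like this (a made-up example):

```
# Shell-style patterns, == match operators, quoted values, and a trailing \
# to continue the rule on the next line:
KERNEL=="sd[a-z]*", SUBSYSTEM=="block", \
    SYMLINK+="blockdev-%k"
```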
%k, $kernel
The value of KERNEL or the internal device name
%n, $number
The device number
%N, $tempnode
The temporary name of the device file
%M, $major
The major number of the device
%m, $minor
The minor number of the device
%s{attribute}, $attr{attribute}
The value of a sysfs attribute (specified by attribute)
%E{variable}, $env{variable}
The value of an environment variable (specified by variable)
%c, $result
The output of PROGRAM
%%
The % character
$$
The $ character
DEVPATH
The device path of the event device, e.g.
DEVPATH=/bus/pci/drivers/ipw3945 to search for all events related to
the ipw3945 driver.
KERNEL
The internal (kernel) name of the event device.
SUBSYSTEM
The subsystem of the event device, e.g. SUBSYSTEM=usb for all events related
to USB devices.
ATTR{filename}
sysfs attributes of the event device. To match a string contained in the vendor
attribute file name, you could use ATTR{vendor}=="On[sS]tream", for
example.
KERNELS
Let udev search the device path upwards for a matching device name.
SUBSYSTEMS
Let udev search the device path upwards for a matching device subsystem name.
DRIVERS
Let udev search the device path upwards for a matching device driver name.
ATTRS{filename}
Let udev search the device path upwards for a device with matching sysfs attribute
values.
ENV{key}
The value of an environment variable, e.g. ENV{ID_BUS}=="ieee1394" to search
for all events related to the FireWire bus ID.
PROGRAM
Let udev execute an external program. For this key to be true, the program must
return with exit code zero. The program's output is printed to stdout and available
to the RESULT key.
RESULT
Match the return value/string of the last PROGRAM call. Either include this key in
the same rule as the PROGRAM key or in a later one.
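A sketch of how PROGRAM and RESULT work together; /usr/local/bin/probe_disk is a hypothetical helper, not a real tool:

```
# The rule only matches if probe_disk exits with code zero and
# its stdout equals "usbstick"
KERNEL=="sd*", PROGRAM=="/usr/local/bin/probe_disk %k", \
    RESULT=="usbstick", SYMLINK+="usbstick%n"
```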
GOTO
Tell udev to skip a number of rules and continue with the one that carries the label
referenced by the GOTO key.
IMPORT{type}
Load variables into the event environment such as the output of an external program.
udev imports variables of several different types. If no type is specified, udev tries
to determine the type itself based on the executable bit of the file permissions.
program tells udev to execute an external program and import its output.
file tells udev to import a text file.
parent tells udev to import the stored keys from the parent device.
WAIT_FOR_SYSFS
Tells udev to wait for the specified sysfs file to be created for a certain device, e.g.
WAIT_FOR_SYSFS="ioerr_cnt" informs udev to wait until the ioerr_cnt
file has been created.
OPTIONS
There are several possible values for the OPTIONS key:
last_rule tells udev to ignore all later rules.
ignore_device tells udev to ignore this event completely.
ignore_remove tells udev to ignore all later remove events for the device.
all_partitions tells udev to create device nodes for all available partitions
on a block device.
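GOTO, LABEL, and OPTIONS can be sketched as follows; the label name is arbitrary:

```
# Skip the following rule for everything that is not a block device
SUBSYSTEM!="block", GOTO="block_end"
# Create nodes for all partitions and stop processing further rules
KERNEL=="sd*", OPTIONS+="all_partitions", OPTIONS+="last_rule"
LABEL="block_end"
```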
systems. Along with the dynamic kernel-provided device node name, udev maintains
classes of persistent symbolic links pointing to the device:
/dev/disk
|-- by-id
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B -> ../../sda
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part1 -> ../../sda1
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part6 -> ../../sda6
|   |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part7 -> ../../sda7
|   |-- usb-Generic_STORAGE_DEVICE_02773 -> ../../sdd
|   `-- usb-Generic_STORAGE_DEVICE_02773-part1 -> ../../sdd1
|-- by-label
|   |-- Photos -> ../../sdd1
|   |-- SUSE10 -> ../../sda7
|   `-- devel -> ../../sda6
|-- by-path
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
|   |-- pci-0000:00:1f.2-scsi-0:0:0:0-part7 -> ../../sda7
|   |-- pci-0000:00:1f.2-scsi-1:0:0:0 -> ../../sr0
|   |-- usb-02773:0:0:2 -> ../../sdd
|   `-- usb-02773:0:0:2-part1 -> ../../sdd1
`-- by-uuid
    |-- 159a47a4-e6e6-40be-a757-a629991479ae -> ../../sda7
    |-- 3e999973-00c9-4917-9442-b7633bd95b9e -> ../../sda6
    `-- 4210-8F8C -> ../../sdd1
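Because these links stay stable across reboots and device reordering, they can be used wherever a kernel device name could appear, for example in /etc/fstab. The UUID below is the one from the listing above; the mount point and options are illustrative, so use the values from your own system:

```
# /etc/fstab entry referring to /dev/sda7 by its persistent UUID link
/dev/disk/by-uuid/159a47a4-e6e6-40be-a757-a629991479ae  /home  ext3  defaults  1 2
```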
/lib/udev/devices/*
Static /dev content
/lib/udev/*
Helper programs called from udev rules
16 File Systems in Linux
16.1 Terminology
metadata
A file system-internal data structure that ensures all the data on disk is properly
organized and accessible. Essentially, it is data about the data. Almost every file
system has its own structure of metadata, which is part of why file systems
show different performance characteristics. It is extremely important to maintain
metadata intact, because otherwise all data on the file system could become inaccessible.
inode
Inodes contain various information about a file, including size, number of links,
pointers to the disk blocks where the file contents are actually stored, and date and
time of creation, modification, and access.
journal
In the context of a file system, a journal is an on-disk structure containing a kind
of log in which the file system stores what it is about to change in the file system's
metadata. Journaling greatly reduces the recovery time of a Linux system because
it eliminates the lengthy search process that checks the entire file system at system
start-up. Instead, only the journal is replayed.
16.2.1 ReiserFS
Officially one of the key features of the 2.4 kernel release, ReiserFS has been available
as a kernel patch for 2.2.x SUSE kernels since version 6.4. ReiserFS was designed by
Hans Reiser and the Namesys development team. It has proven itself to be a powerful
alternative to Ext2. Its key assets are better disk space utilization, better disk access
performance, and faster crash recovery.
ReiserFS's strengths, in more detail, are:
Better Disk Space Utilization
In ReiserFS, all data is organized in a structure called B*-balanced tree. The tree
structure contributes to better disk space utilization because small files can be stored
directly in the B* tree leaf nodes instead of being stored elsewhere and just maintaining a pointer to the actual disk location. In addition to that, storage is not allocated in chunks of 1 or 4 KB, but in portions of the exact size needed. Another
benefit lies in the dynamic allocation of inodes. This keeps the file system more
flexible than traditional file systems, like Ext2, where the inode density must be
specified at file system creation time.
Better Disk Access Performance
For small files, file data and stat_data (inode) information are often stored next
to each other. They can be read with a single disk I/O operation, meaning that only
one access to disk is required to retrieve all the information needed.
Fast Crash Recovery
Using a journal to keep track of recent metadata changes makes a file system check
a matter of seconds, even for huge file systems.
Reliability through Data Journaling
ReiserFS also supports data journaling and ordered data modes similar to the concepts outlined in Section 16.2.3, Ext3 (page 244). The default
mode is data=ordered, which ensures both data and metadata integrity, but
uses journaling only for metadata.
16.2.2 Ext2
The origins of Ext2 go back to the early days of Linux history. Its predecessor, the
Extended File System, was implemented in April 1992 and integrated in Linux 0.96c.
The Extended File System underwent a number of modifications and, as Ext2, became
the most popular Linux file system for years. With the creation of journaling file systems
and their astonishingly short recovery times, Ext2 became less important.
A brief summary of Ext2's strengths might help understand why it was, and in some
areas still is, the favorite Linux file system of many Linux users.
Solidity
Being quite an old-timer, Ext2 underwent many improvements and was heavily
tested. This may be the reason why people often refer to it as rock-solid. After a
system outage when the file system could not be cleanly unmounted, e2fsck starts
to analyze the file system data. Metadata is brought into a consistent state and
pending files or data blocks are written to a designated directory (called
lost+found). In contrast to journaling file systems, e2fsck analyzes the entire file
system and not just the recently modified bits of metadata. This takes significantly
longer than checking the log data of a journaling file system. Depending on file
system size, this procedure can take half an hour or more. Therefore, it is not desirable to choose Ext2 for any server that needs high availability. However, because
Ext2 does not maintain a journal and uses significantly less memory, it is sometimes
faster than other file systems.
Easy Upgradability
The code for Ext2 is the strong foundation on which Ext3 could become a highly acclaimed next-generation file system. Its reliability and solidity were elegantly
combined with the advantages of a journaling file system.
16.2.3 Ext3
Ext3 was designed by Stephen Tweedie. Unlike all other next-generation file systems,
Ext3 does not follow a completely new design principle. It is based on Ext2. These two
file systems are very closely related to each other. An Ext3 file system can be easily
built on top of an Ext2 file system. The most important difference between Ext2 and
Ext3 is that Ext3 supports journaling. In summary, Ext3 has three major advantages to
offer:
Easy and Highly Reliable Upgrades from Ext2
Because Ext3 is based on the Ext2 code and shares its on-disk format as well as its
metadata format, upgrades from Ext2 to Ext3 are incredibly easy. Unlike transitions
to other journaling file systems, such as ReiserFS or XFS, which can be quite tedious
(making backups of the entire file system and recreating it from scratch), a transition
to Ext3 is a matter of minutes. It is also very safe, because recreating an entire file
system from scratch might not work flawlessly. Considering the number of existing
Ext2 systems that await an upgrade to a journaling file system, you can easily figure
out why Ext3 might be of some importance to many system administrators.
Downgrading from Ext3 to Ext2 is as easy as the upgrade. Just perform a clean
unmount of the Ext3 file system and remount it as an Ext2 file system.
Reliability and Performance
Some other journaling file systems follow the metadata-only journaling approach.
This means your metadata is always kept in a consistent state, but the same cannot
be automatically guaranteed for the file system data itself. Ext3 is designed to take
care of both metadata and data. The degree of care can be customized. Enabling
Ext3 in the data=journal mode offers maximum security (data integrity), but
can slow down the system because both metadata and data are journaled. A relatively new approach is to use the data=ordered mode, which ensures both data
and metadata integrity, but uses journaling only for metadata. The file system
driver collects all data blocks that correspond to one metadata update. These data
blocks are written to disk before the metadata is updated. As a result, consistency
is achieved for metadata and data without sacrificing performance. A third option
to use is data=writeback, which allows data to be written into the main file
system after its metadata has been committed to the journal. This option is often
considered the best in performance. It can, however, allow old data to reappear in
files after a crash and recovery while internal file system integrity is maintained.
Unless you specify something else, Ext3 is run with the data=ordered default.
16.2.5 XFS
SGI started XFS development in the early 1990s, originally intending it as the file
system for its IRIX OS. The idea behind XFS was to create a high-performance 64-bit
journaling file system to meet the extreme computing challenges of today. XFS is very
good at manipulating large files and performs well on high-end hardware. However,
even XFS has a drawback. Like ReiserFS, XFS takes great care of metadata integrity,
but less of data integrity.
A quick review of XFS's key features explains why it may prove a strong competitor
for other journaling file systems in high-end computing.
High Scalability through the Use of Allocation Groups
At the creation time of an XFS file system, the block device underlying the file
system is divided into eight or more linear regions of equal size. Those are referred
to as allocation groups. Each allocation group manages its own inodes and free
disk space. Practically, allocation groups can be seen as file systems in a file system.
Because allocation groups are rather independent of each other, more than one of
them can be addressed by the kernel simultaneously. This feature is the key to
XFS's great scalability. Naturally, the concept of independent allocation groups
suits the needs of multiprocessor systems.
High Performance through Efficient Management of Disk Space
Free space and inodes are handled by B+ trees inside the allocation groups. The
use of B+ trees greatly contributes to XFS's performance and scalability. XFS uses
delayed allocation. It handles allocation by breaking the process into two pieces.
A pending transaction is stored in RAM and the appropriate amount of space is
reserved. XFS still does not decide where exactly (speaking of file system blocks)
the data should be stored. This decision is delayed until the last possible moment.
Some short-lived temporary data may never make its way to disk, because it may
be obsolete by the time XFS decides where actually to save it. Thus XFS increases
write performance and reduces file system fragmentation. Because delayed allocation
results in less frequent write events than in other file systems, it is likely that data
loss after a crash during a write is more severe.
Preallocation to Avoid File System Fragmentation
Before writing the data to the file system, XFS reserves (preallocates) the free space
needed for a file. Thus, file system fragmentation is greatly reduced. Performance
is increased because the contents of a file are not distributed all over the file system.
cramfs
Compressed ROM file system: A compressed read-only file system for ROMs.
hpfs
High Performance File System: the IBM OS/2 standard file system, only supported
in read-only mode.
iso9660
Standard file system on CD-ROMs.
minix
This file system originated from academic projects on operating systems and was
the first file system used in Linux. Today, it is mainly used as a file system for
floppy disks.
msdos
fat, the file system originally used by DOS, is today used by various operating
systems.
ncpfs
File system for mounting Novell volumes over networks.
nfs
Network File System: here, data can be stored on any machine in a network and
access may be granted via a network.
smbfs
Server Message Block is used by products such as Windows to enable file access
over a network.
sysv
Used on SCO UNIX, Xenix, and Coherent (commercial UNIX systems for PCs).
ufs
Used by BSD, SunOS, and NeXTSTEP. Only supported in read-only mode.
umsdos
UNIX on MSDOS: applied on top of a normal fat file system, achieves UNIX
functionality (permissions, links, long filenames) by creating special files.
vfat
Virtual FAT: extension of the fat file system (supports long filenames).
ntfs
Windows NT file system; read-only.
[Table: maximum file and file system sizes per file system; the column layout was lost in extraction. Recoverable fragments: rows for ReiserFS v3 and XFS, with values including 2^31 (2 GB), 2^41 (2 TB), 2^43 (8 TB), and 2^63 (8 EB).]
A comprehensive multipart tutorial about Linux file systems can be found at IBM
developerWorks: http://www-106.ibm.com/developerworks/library/l-fs.html. A very
in-depth comparison of file systems (not only Linux file systems) is available from
the Wikipedia project at http://en.wikipedia.org/wiki/Comparison_of_file_systems#Comparison.
17 Access Control Lists in Linux
POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more
flexibly than the traditional permission concept allows.
The term POSIX ACL suggests that this is a true POSIX (portable operating system
interface) standard. The respective draft standards POSIX 1003.1e and POSIX 1003.2c
have been withdrawn for several reasons. Nevertheless, ACLs as found on many systems
belonging to the UNIX family are based on these drafts and the implementation of file
system ACLs as described in this chapter follows these two standards as well. They
can be viewed at http://wt.xpilot.org/publications/posix.1e/.
would not be able to change passwd, because it would be too dangerous to grant all
users direct access to this file. A possible solution to this problem is the setuid mechanism. setuid (set user ID) is a special file attribute that instructs the system to execute
programs marked accordingly under a specific user ID. Consider the passwd command:
-rwsr-xr-x
You can see the s that denotes that the setuid bit is set for the user permission. By
means of the setuid bit, all users starting the passwd command execute it as root.
backup
You can see the s that denotes that the setgid bit is set for the group permission. The
owner of the directory and members of the group archive may access this directory.
Users that are not members of this group are mapped to the respective group. The
effective group ID of all written files will be archive. For example, a backup program
that runs with the group ID archive is able to access this directory even without root
privileges.
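The effect of both bits on the permission string can be reproduced with scratch files. The file and directory names here are arbitrary, and GNU stat is assumed:

```shell
# Demonstrate how setuid/setgid show up as "s" in permission strings.
cp /bin/true demo_prog
chmod 4755 demo_prog        # leading 4 sets the setuid bit
stat -c '%A' demo_prog      # prints -rwsr-xr-x (s in the user triplet)

mkdir -p demo_shared
chmod 2770 demo_shared      # leading 2 sets the setgid bit on the directory
stat -c '%A' demo_shared    # prints drwxrws--- (s in the group triplet)
```

Files created inside demo_shared now inherit the directory's group, which is exactly the backup-directory scenario described above.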
17.3 Definitions
user class
The conventional POSIX permission concept uses three classes of users for assigning permissions in the file system: the owner, the owning group, and other users.
Three permission bits can be set for each user class, giving permission to read (r),
write (w), and execute (x).
access ACL
The user and group access permissions for all kinds of file system objects (files
and directories) are determined by means of access ACLs.
default ACL
Default ACLs can only be applied to directories. They determine the permissions
a file system object inherits from its parent directory when it is created.
ACL entry
Each ACL consists of a set of ACL entries. An ACL entry contains a type, a qualifier for the user or group to which the entry refers, and a set of permissions. For
some entry types, the qualifier for the group or users is undefined.
Table 17.1 ACL Entry Types

Type           Text Form
owner          user::rwx
named user     user:name:rwx
owning group   group::rwx
named group    group:name:rwx
mask           mask::rwx
other          other::rwx
Table 17.2 Masking Access Permissions

Entry Type               Text Form        Permissions
named user               user:geeko:r-x   r-x
mask                     mask::rw-        rw-
effective permissions:                    r--
ACL entry owner. Other class permissions are mapped to the respective ACL entry.
However, the mapping of the group class permissions is different in the two cases.
Figure 17.1 Minimum ACL: ACL Entries Compared to Permission Bits
In the case of a minimum ACL (without mask), the group class permissions are mapped
to the ACL entry owning group. This is shown in Figure 17.1, Minimum ACL: ACL
Entries Compared to Permission Bits (page 256). In the case of an extended ACL (with
mask), the group class permissions are mapped to the mask entry. This is shown in
Figure 17.2, Extended ACL: ACL Entries Compared to Permission Bits (page 256).
Figure 17.2 Extended ACL: ACL Entries Compared to Permission Bits
Before creating the directory, use the umask command to define which access permissions should be masked each time a file object is created. The command umask 027
sets the default permissions by giving the owner the full range of permissions (0),
denying the group write access (2), and giving other users no permissions at all (7).
umask actually masks the corresponding permission bits or turns them off. For details,
consult the umask man page.
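The masking arithmetic can be verified directly in a shell; GNU stat is assumed and the directory name is arbitrary:

```shell
# umask 027 clears the group-write bit (2) and all "other" bits (7).
# New directories start from mode 777, so 777 & ~027 = 750.
umask 027
mkdir demo_masked
stat -c '%a' demo_masked    # prints 750
```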
mkdir mydir creates the mydir directory with the default permissions as set by
umask. Use ls -dl mydir to check whether all permissions were assigned correctly.
The output for this example is:
drwxr-x--- ... tux project3 ... mydir
With getfacl mydir, check the initial state of the ACL. This gives information
like:
# file: mydir
# owner: tux
# group: project3
user::rwx
group::r-x
other::---
The first three output lines display the name, owner, and owning group of the directory.
The next three lines contain the three ACL entries owner, owning group, and other. In
fact, in the case of this minimum ACL, the getfacl command does not produce any
information you could not have obtained with ls.
Modify the ACL to assign read, write, and execute permissions to an additional user
geeko and an additional group mascots with:
setfacl -m user:geeko:rwx,group:mascots:rwx mydir
The option -m prompts setfacl to modify the existing ACL. The following argument
indicates the ACL entries to modify (multiple entries are separated by commas). The
final part specifies the name of the directory to which these modifications should be
applied. Use the getfacl command to take a look at the resulting ACL.
# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---
In addition to the entries initiated for the user geeko and the group mascots, a mask
entry has been generated. This mask entry is set automatically so that all permissions
are effective. setfacl automatically adapts existing mask entries to the settings
modified, unless you deactivate this feature with -n. mask defines the maximum effective access permissions for all entries in the group class. This includes named user,
named group, and owning group. The group class permission bits displayed by ls -dl
mydir now correspond to the mask entry.
drwxrwx---+ ... tux project3 ... mydir
The first column of the output contains an additional + to indicate that there is an extended ACL for this item.
According to the output of the ls command, the permissions for the mask entry include
write access. Traditionally, such permission bits would mean that the owning group
(here project3) also has write access to the directory mydir. However, the effective
access permissions for the owning group correspond to the overlapping portion of the
permissions defined for the owning group and for the mask, which is r-x in our example (see Table 17.2, Masking Access Permissions (page 255)). As far as the effective
permissions of the owning group in this example are concerned, nothing has changed
even after the addition of the ACL entries.
Edit the mask entry with setfacl or chmod. For example, use chmod g-w mydir.
ls -dl mydir then shows:
drwxr-x---+ ... tux project3 ... mydir
user:geeko:rwx                  # effective: r-x
group:mascots:rwx               # effective: r-x
After executing the chmod command to remove the write permission from the group
class bits, the output of the ls command is sufficient to see that the mask bits must
have changed accordingly: write permission is again limited to the owner of mydir.
The output of the getfacl confirms this. This output includes a comment for all those
entries in which the effective permission bits do not correspond to the original permissions, because they are filtered according to the mask entry. The original permissions
can be restored at any time with chmod g+w mydir.
The option -d of the setfacl command prompts setfacl to perform the following modifications (option -m) in the default ACL.
Take a closer look at the result of this command:
getfacl mydir
# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---
default:user::rwx
default:group::r-x
default:group:mascots:r-x
default:mask::r-x
default:other::---
getfacl returns both the access ACL and the default ACL. The default ACL is
formed by all lines that start with default. Although you merely executed the
setfacl command with an entry for the mascots group for the default ACL,
setfacl automatically copied all other entries from the access ACL to create a
valid default ACL. Default ACLs do not have an immediate effect on access permissions. They only come into play when file system objects are created. These
new objects inherit permissions only from the default ACL of their parent directory.
2. In the next example, use mkdir to create a subdirectory in mydir, which inherits
the default ACL.
mkdir mydir/mysubdir
getfacl mydir/mysubdir
# file: mydir/mysubdir
# owner: tux
# group: project3
user::rwx
group::r-x
group:mascots:r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:mascots:r-x
default:mask::r-x
default:other::---
touch uses a mode with the value 0666 when creating new files, which means
that the files are created with read and write permissions for all user classes, provided no other restrictions exist in umask or in the default ACL (see Section
Effects of a Default ACL (page 259)). In effect, this means that all access permissions not contained in the mode value are removed from the respective ACL entries.
Although no permissions were removed from the ACL entry of the group class,
the mask entry was modified to mask permissions not set in mode.
This approach ensures the smooth interaction of applications, such as compilers,
with ACLs. You can create files with restricted access permissions and subsequently
mark them as executable. The mask mechanism guarantees that the right users
and groups can execute them as desired.
access is handled in accordance with the entry that best suits the process. Permissions
do not accumulate.
Things are more complicated if a process belongs to more than one group and would
potentially suit several group entries. An entry is randomly selected from the suitable
entries with the required permissions. It is irrelevant which of the entries triggers the
final result "access granted". Likewise, if none of the suitable group entries contains
the required permissions, a randomly selected entry triggers the final result "access
denied".
18 Authentication with PAM
all PAM configuration files without requiring the administrator to update every single
PAM configuration file.
The global common PAM configuration files are maintained using the pam-config tool.
This tool automatically adds new modules to the configuration, changes the configuration
of existing ones or deletes modules or options from the configurations. Manual intervention in maintaining PAM configurations is minimized or no longer required.
PAM modules are processed as stacks. Different types of modules have different purposes, for example, one module checks the password, another one verifies the location
from which the system is accessed, and yet another one reads user-specific settings.
PAM knows about four different types of modules:
auth
The purpose of this type of module is to check the user's authenticity. This is traditionally done by querying a password, but it can also be achieved with the help of
a chip card or through biometrics (fingerprints or iris scan).
account
Modules of this type check whether the user has general permission to use the requested service. As an example, such a check should be performed to ensure that
no one can log in under the username of an expired account.
password
The purpose of this type of module is to enable the change of an authentication
token. In most cases, this is a password.
session
Modules of this type are responsible for managing and configuring user sessions.
They are started before and after authentication to register login attempts in system
logs and configure the user's specific environment (mail accounts, home directory,
system limits, etc.).
The second column contains control flags to influence the behavior of the modules
started:
required
A module with this flag must be successfully processed before the authentication
may proceed. After the failure of a module with the required flag, all other
modules with the same flag are processed before the user receives a message about
the failure of the authentication attempt.
requisite
Modules having this flag must also be processed successfully, in much the same
way as a module with the required flag. However, in case of failure a module
with this flag gives immediate feedback to the user and no further modules are
processed. In case of success, other modules are subsequently processed, just like
any modules with the required flag. The requisite flag can be used as a
basic filter checking for the existence of certain conditions that are essential for a
correct authentication.
sufficient
After a module with this flag has been successfully processed, the calling application
receives an immediate message about the success and no further modules are processed, provided there was no preceding failure of a module with the required
flag. The failure of a module with the sufficient flag has no direct consequences, in the sense that any subsequent modules are processed in their respective
order.
optional
The failure or success of a module with this flag does not have any direct consequences. This can be useful for modules that are only intended to display a message
(for example, to tell the user that mail has arrived) without taking any further action.
include
If this flag is given, the file specified as argument is inserted at this place.
The module path does not need to be specified explicitly, as long as the module is located in the default directory /lib/security (for all 64-bit platforms supported by
openSUSE, the directory is /lib64/security). The fourth column may contain
an option for the given module, such as debug (enables debugging) or nullok (allows
the use of empty passwords).
The typical PAM configuration of an application (sshd, in this case) contains four include
statements referring to the configuration files of four module types: common-auth,
common-account, common-password, and common-session. These four
files hold the default configuration for each module type. By including them instead of
calling each module separately for each PAM application, you automatically get an
updated PAM configuration if the administrator changes the defaults. In former times, you had
to adjust all configuration files manually for all applications when changes to PAM
occurred or a new application was installed. Now the PAM configuration is made with
central configuration files and all changes are automatically inherited by the PAM
configuration of each service.
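Put together, a PAM configuration file for a service such as sshd essentially reduces to include statements, one per module type. The following is a sketch of /etc/pam.d/sshd; the file shipped with your system may contain additional lines:

```
#%PAM-1.0
auth     include        common-auth
auth     required       pam_nologin.so
account  include        common-account
password include        common-password
session  include        common-session
```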
The first include file (common-auth) calls two modules of the auth type: pam_env
and pam_unix2. See Example 18.2, Default Configuration for the auth Section
(page 266).
Example 18.2 Default Configuration for the auth Section

auth  required  pam_env.so
auth  required  pam_unix2.so
The first one, pam_env, loads /etc/security/pam_env.conf to set environment
variables as specified in this file, for example, setting the DISPLAY
variable to the correct value, because the pam_env module knows about the location
from which the login is taking place. The second one, pam_unix2, checks the user's
login and password against /etc/passwd and /etc/shadow.
After the modules specified in common-auth have been successfully called, a third
module called pam_nologin checks whether the file /etc/nologin exists. If it
does, no user other than root may log in. The whole stack of auth modules is processed before sshd gets any feedback about whether the login has succeeded. Given
that all modules of the stack have the required control flag, they must all be processed
successfully before sshd receives a message about the positive result. If one of the
modules is not successful, the entire module stack is still processed and only then is
sshd notified about the negative result.
As soon as all modules of the auth type have been successfully processed, another
include statement is processed, in this case, that in Example 18.3, Default Configuration
for the account Section (page 267). common-account contains just one module,
pam_unix2. If pam_unix2 returns the result that the user exists, sshd receives a
message announcing this success and the next stack of modules (password) is processed, shown in Example 18.4, Default Configuration for the password Section
(page 267).
Example 18.3 Default Configuration for the account Section

account  required  pam_unix2.so

Example 18.4 Default Configuration for the password Section

password  required  pam_pwcheck.so  nullok cracklib
password  required  pam_unix2.so    nullok use_authtok
password  required  pam_make.so     /var/yp
Again, the PAM configuration of sshd involves just an include statement referring to
the default configuration for password modules located in common-password.
These modules must successfully be completed (control flag required) whenever
the application requests the change of an authentication token. Changing a password
or another authentication token requires a security check. This is achieved with the pam
_pwcheck module. The pam_unix2 module used afterwards carries over any old
and new passwords from pam_pwcheck, so the user does not need to authenticate
again. This also makes it impossible to circumvent the checks carried out by pam
_pwcheck. The modules of the password type should be used wherever the preceding
modules of the account or the auth type are configured to complain about an expired
password.
Example 18.5 Default Configuration for the session Section

session  required  pam_limits.so
session  required  pam_unix2.so
session  optional  pam_umask.so
As the final step, the modules of the session type, bundled in the common-session
file, are called to configure the session according to the settings for the user in question.
The pam_limits module loads the file /etc/security/limits.conf, which
may define limits on the use of certain system resources. The pam_unix2 module is
processed again. The pam_umask module can be used to set the file mode creation
mask. Since this module carries the optional flag, a failure of this module would
not affect the successful completion of the entire session module stack. The session
modules are called a second time when the user logs out.
3 Add debugging for test purposes. To make sure the new authentication procedure works as planned, turn on debugging for all PAM-related operations. The
command pam-config --add --ldap-debug turns on debugging for LDAP-related
PAM operations. Find the debugging output in /var/log/messages.
4 Query your setup. Before you finally apply your new PAM setup, check
whether it contains all the options you planned to add. The command pam-config
--query --module lists both the type and the options for the queried PAM
module.
5 Remove the debug options. Finally, remove the debug option from your setup
when you are entirely satisfied with its performance. The command pam-config
--delete --ldap-debug turns off debugging for LDAP authentication. If you
added debugging options for other modules, use similar commands to turn these
off.
When you create your PAM configuration files from scratch using the pam-config
--create command, it creates symbolic links from the common-* to the
common-*-pc files. pam-config only modifies the common-*-pc configuration
files. Removing these symbolic links effectively disables pam-config, because
pam-config only operates on the common-*-pc files and these files are not put
into effect without the symbolic links.
For more information on the pam-config command and the options available, refer
to the manual page of pam-config, pam-config(8).
security aspects of PAM. The document is available as a PDF file, in HTML format,
and as plain text.
The Linux-PAM Module Writers' Manual
This document summarizes the topic from the developer's point of view, with information about how to write standard-compliant PAM modules. It is available as
a PDF file, in HTML format, and as plain text.
The Linux-PAM Application Developers' Guide
This document includes everything needed by an application developer who wants
to use the PAM libraries. It is available as a PDF file, in HTML format, and as
plain text.
The PAM Manual Pages
PAM in general as well as the individual modules come with manual pages that
provide a good overview of the functionality provided by the respective component.
Thorsten Kukuk has developed a number of PAM modules and made some information
available about them at http://www.suse.de/~kukuk/pam/.
Working with the Shell
19
Although graphical user interfaces have become very important and user-friendly, using them is not the only way to communicate with your system. A command line interpreter, in Unix/Linux called a shell, provides a highly flexible and efficient means for text-oriented communication with your system.
In administration, shell-based applications are especially important for controlling
computers over slow network links or if you want to perform tasks as administrator on
the command line.
This chapter deals with a couple of basics you need to know for making efficient use
of the command line: the directory structure of Linux, the user and permission concept
of Linux, an overview of important shell commands, and a short introduction to the vi
editor, which is a default editor always available in Unix and Linux systems.
logged in as system administrator, root, Bash indicates this with a hash symbol, #.
Directly after login, the current directory is usually the home directory of the user account
with which you have logged in, indicated by the tilde symbol, ~. When you are logged
in on a remote computer the information provided by the prompt always shows you
which system you are currently working on. You can now enter commands and execute
tasks. To log out from the shell, enter exit and press Alt + F7 to switch back to the
graphical user interface. You will find your desktop and the applications running on it
unchanged.
To start a terminal window within the graphical user interface in KDE or GNOME, press
Alt + F2 and enter xterm (or click the Konsole or GNOME terminal icon in the panel).
This opens a terminal window on your desktop. As you are already logged in to your
desktop the prompt shows the usual login and path information. You can now enter
commands and execute tasks just like in any shell which runs parallel to your desktop.
To close the terminal window press Alt + F4 .
The Konsole or the GNOME Terminal window appears, displaying the prompt on the first line; see Figure 19.1, Example of a Bash Terminal Window (page 272). The
prompt usually shows your login name (in this example, tux), the hostname of your
computer (here, knox), and the current path (in this case, your home directory, indicated
by the tilde symbol, ~). When you are logged in on a remote computer, this information always shows you which system you are currently working on. When the cursor is placed behind this prompt, you can send commands directly to your computer system.
Figure 19.1 Example of a Bash Terminal Window
Because the shell does not offer a graphical overview of directories and files like the
tree view in a file manager, it is useful to have some basic knowledge of the default
directory structure in Linux.
Directory    Contents

/bin         Essential shell commands and programs
/boot        Boot loader and kernel data needed to start the system
/dev         Device files that represent hardware components
/etc         Local system configuration files
/home        Home directories of the system's users
/lib         Essential shared libraries and kernel modules
/media       Mount points for removable media
/mnt         Mount point for temporarily mounted file systems
/opt         Optional, add-on application software
/root        Home directory of the system administrator, root
/sbin        Essential system binaries reserved for root
/srv         Data for services provided by the system
/tmp         Temporary files
/usr         Static, shareable data (UNIX system resources), such as application programs
/var         Variable data, such as log files and spool directories
/windows     Data from a Windows partition, if Windows is installed alongside Linux
The following list provides more detailed information and gives some examples which
files and subdirectories can be found in the directories:
/bin
Contains the basic shell commands that may be used both by root and by other
users. These commands include ls, mkdir, cp, mv, rm, and rmdir. /bin also
contains Bash, the default shell in openSUSE.
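A short session using these basic commands might look as follows. This is only an illustrative sketch, run in a temporary directory so that no existing data is touched:

```shell
# Work in a throwaway directory so nothing on the system is changed.
workdir=$(mktemp -d)
cd "$workdir"

mkdir projects              # create a directory
echo "draft" > notes.txt    # create a small file
cp notes.txt projects/      # copy it into the directory
mv notes.txt notes.bak      # rename (move) the original
ls -l projects              # list the directory contents in long form

rm notes.bak projects/notes.txt   # remove the files
rmdir projects                    # remove the now-empty directory
cd / && rm -rf "$workdir"         # clean up the scratch directory
```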
/boot
Contains data required for booting, such as the boot loader, the kernel, and other
data that is used before the kernel begins executing user mode programs.
/dev
Holds device files that represent hardware components.
/etc
Contains local configuration files that control the operation of programs like the
X Window System. The /etc/init.d subdirectory contains scripts that are
executed during the boot process.
/home/username
Holds the private data of every user who has an account on the system. The files
located here can only be modified by their owner or by the system administrator.
By default, your e-mail directory and personal desktop configuration are located here in the form of hidden files and directories. KDE users find the personal configuration data for their desktop in .kde, GNOME users find it in .gconf. For information about hidden files, refer to Section Key Features (Chapter 7, Basic Concepts, Start-Up).
/usr
/usr has nothing to do with users, but is the acronym for UNIX system resources.
The data in /usr is static, read-only data that can be shared among various hosts
compliant to the Filesystem Hierarchy Standard (FHS). This directory contains all
application programs and establishes a secondary hierarchy in the file system. /usr
holds a number of subdirectories, such as /usr/bin, /usr/sbin, /usr/
local, and /usr/share/doc.
/usr/bin
Contains generally accessible programs.
/usr/sbin
Contains programs reserved for the system administrator, such as repair functions.
/usr/local
In this directory, the system administrator can install local, distribution-independent
extensions.
/usr/share/doc
Holds various documentation files and the release notes for your system. In the
manual subdirectory, find an online version of this manual. If more than one
language is installed, this directory may contain versions of the manuals for different
languages.
Under packages, find the documentation included in the software packages installed on your system. For every package, a subdirectory /usr/share/doc/
packages/packagename is created that often holds README files for the
package and sometimes examples, configuration files, or additional scripts.
If HOWTOs are installed on your system, /usr/share/doc also holds the howto subdirectory in which to find additional documentation on many tasks relating to the setup and operation of Linux software.
/var
Whereas /usr holds static, read-only data, /var is for data which is written during
system operation and thus is variable data, such as log files or spooling data. For
example, the log files of your system are in /var/log/messages (only accessible for root).
/windows
Only available if you have both Microsoft Windows and Linux installed on your
system. Contains the Windows data available on the Windows partition of your
system. Whether you can edit the data in this directory depends on the file system
your Windows partition uses. If it is FAT32, you can open and edit the files in this
directory. For an NTFS file system, however, you can only read your Windows
files from Linux, but not modify them. Learn more in Section Accessing Files on
Different OS on the Same Computer (Chapter 11, Copying and Sharing Files,
Start-Up).
File Access
The organization of permissions in the file system differs for files and directories.
File permission information can be displayed with the command ls -l. The
output could appear as in Example 19.1, Sample Output Showing File Permissions
(page 278).
Example 19.1 Sample Output Showing File Permissions

-rw-r----- 1 tux project3 14197 Jun 21 15:03 Roadmap
As shown in the third column, this file belongs to user tux. It is assigned to the
group project3. To discover the user permissions of the Roadmap file, the first
column must be examined more closely.
-       rw-                r--                 ---
Type    User Permissions   Group Permissions   Permissions for Others

This column consists of one leading character followed by nine characters grouped in threes. The first of the ten letters stands for the type of file system component. The hyphen (-) shows that this is a file. A directory (d), a link (l), a block device (b), or a character device (c) could also be indicated.
The next three blocks follow a standard pattern. The first three characters refer to whether the file is readable (r) or not (-). A w in the middle portion symbolizes that the corresponding object can be edited and a hyphen (-) means it is not possible to write to the file. An x in the third position denotes that the object can be executed. Because the file in this example is a text file and not one that is executable, executable access for this particular file is not needed.
In this example, tux has, as owner of the file Roadmap, read (r) and write access
(w) to it, but cannot execute it (x). The members of the group project3 can read
the file, but they cannot modify it or execute it. Other users do not have any access
to this file. Other permissions can be assigned by means of ACLs (access control
lists).
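The permissions shown in Example 19.1 can be reproduced with chmod and inspected with stat. This is a sketch only; the group project3 is not recreated here, so the file simply keeps your primary group:

```shell
cd "$(mktemp -d)"          # work in a scratch directory
touch Roadmap
chmod u=rw,g=r,o= Roadmap  # owner: read and write; group: read; others: no access

ls -l Roadmap              # the first column reads -rw-r-----
stat -c '%A' Roadmap       # prints the same permission string on its own
```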
Directory Permissions
Access permissions for directories have the type d. For directories, the individual
permissions have a slightly different meaning.
Example 19.2 Sample Output Showing Directory Permissions

drwxrwxr-x 1 tux project3 35 Jun 21 15:15 ProjectData
In Example 19.2, Sample Output Showing Directory Permissions (page 279), the
owner (tux) and the owning group (project3) of the directory ProjectData
are easy to recognize. In contrast to the file access permissions from File Access
(page 278), the set reading permission (r) means that the contents of the directory
can be shown. The write permission (w) means that new files can be created. The
executable permission (x) means that the user can change to this directory. In the
above example, the user tux as well as the members of the group project3 can
change to the ProjectData directory (x), view the contents (r), and add or
delete files (w). The rest of the users, on the other hand, are given less access. They
may enter the directory (x) and browse through it (r), but not insert any new files
(w).
x (execute)
4. Filename or filenames separated by spaces
If, for example, the user tux in Example 19.2, Sample Output Showing Directory
Permissions (page 279) also wants to grant other users write (w) access to the directory ProjectData, he can do this using the command chmod o+w
ProjectData.
If, however, he wants to deny all users other than himself write permissions, he
can do this by entering the command chmod go-w ProjectData. To prohibit
all users from adding a new file to the folder ProjectData, enter chmod -w
ProjectData. Now, not even the owner can create a new file in the directory
without first reestablishing write permissions.
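The first two chmod invocations above can be replayed on a freshly created ProjectData directory and verified with stat after each step. This is a sketch; the starting mode 775 matches the permissions described in the example:

```shell
cd "$(mktemp -d)"
mkdir ProjectData
chmod 775 ProjectData        # start as in the example: rwxrwxr-x
stat -c '%A' ProjectData     # prints drwxrwxr-x

chmod o+w ProjectData        # grant other users write access
stat -c '%A' ProjectData     # prints drwxrwxrwx

chmod go-w ProjectData       # deny write to group and others again
stat -c '%A' ProjectData     # prints drwxr-xr-x
```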
Changing Ownership Permissions
Other important commands to control the ownership and permissions of the file
system components are chown (change owner) and chgrp (change group). The
command chown can be used to transfer ownership of a file to another user.
However, only root is permitted to perform this change.
Suppose the file Roadmap from Example 19.1, Sample Output Showing File Permissions (page 278) should no longer belong to tux, but to the user geeko. root should then enter chown geeko Roadmap.
chgrp changes the group ownership of the file. However, the owner of the file must be a member of the new group. In this way, the user tux from Example 19.2, Sample Output Showing Directory Permissions (page 279) can switch the group owning the directory ProjectData to project4 with the command chgrp project4 ProjectData, as long as he is a member of this new group.
In the man pages, move up and down with PgUp and PgDn. Move between the beginning
and the end of a document with Home and End. End this viewing mode by pressing Q.
Learn more about the man command itself with man man.
In the following overview, the individual command elements are written in different
typefaces. The actual command and its mandatory options are always printed as
command option. Specifications or parameters that are not required are placed in
[square brackets].
Adjust the settings to your needs. It makes no sense to write ls file if no file named
file actually exists. You can usually combine several parameters, for example, by
writing ls -la instead of ls -l -a.
File Administration
ls [options] [files]
If you run ls without any additional parameters, the program lists the contents of
the current directory in short form.
-l
Detailed list
-a
Displays hidden files
cp [options] source target
Copies source to target.
-i
Waits for confirmation, if necessary, before an existing target is overwritten
-r
Copies recursively (includes subdirectories)
-R
Changes files and directories in all subdirectories
chgrp [options] groupname files
Transfers the group ownership of a given file to the group with the specified
group name. The file owner can only change group ownership if a member of both
the current and the new group.
chmod [options] mode files
Changes the access permissions.
The mode parameter has three parts: group, access, and access type.
group accepts the following characters:
u
User
g
Group
o
Others
For access, grant access with + and deny it with -.
The access type is controlled by the following options:
r
Read
w
Write
x
Execute: executing files or changing to the directory
s
Setuid bit: the application or program is started as if it were started by the owner of the file
As an alternative, a numeric code can be used. The four digits of this code are composed of the sum of the values 4, 2, and 1 (the decimal result of a binary mask). The first digit sets the set user ID (SUID) (4), the set group ID (2), and the sticky (1) bits. The second digit defines the permissions of the owner of the file. The third digit defines the permissions of the group members and the last digit sets the permissions for all other users. The read permission is set with 4, the write permission with 2, and the permission for executing a file is set with 1. The owner of a file would usually receive a 6 or a 7 for executable files.
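The numeric code can be checked against the symbolic display with stat. In this sketch, the modes 640 and 754 are arbitrary examples applied to a scratch file:

```shell
cd "$(mktemp -d)"
touch report

chmod 640 report        # 6 = 4+2 (rw) for the owner, 4 (r) for the group, 0 for others
stat -c '%a %A' report  # prints: 640 -rw-r-----

chmod 754 report        # 7 = 4+2+1 (rwx), 5 = 4+1 (r-x), 4 = 4 (r--)
stat -c '%a %A' report  # prints: 754 -rwxr-xr--
```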
gzip [parameters] files
This program compresses the contents of files using complex mathematical algorithms. Files compressed in this way are given the extension .gz and need to be
uncompressed before they can be used. To compress several files or even entire
directories, use the tar command.
-d
Decompresses the packed gzip files so they return to their original size and
can be processed normally (like the command gunzip)
tar options archive files
tar puts one or more files into an archive. Compression is optional. tar is a quite
complex command with a number of options available. The most frequently used
options are:
-f
Writes the output to a file and not to the screen as is usually the case
-c
Creates a new tar archive
-r
Adds files to an existing archive
-t
Outputs the contents of an archive
-u
Adds files, but only if they are newer than the files already contained in the
archive
-x
Unpacks files from an archive (extraction)
-z
Packs the resulting archive with gzip
-j
Compresses the resulting archive with bzip2
-v
Lists files processed
The archive files created by tar end with .tar. If the tar archive was also compressed using gzip, the ending is .tgz or .tar.gz. If it was compressed using
bzip2, the ending is .tar.bz2.
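A minimal round trip with these tar options might look as follows; the file and directory names are arbitrary examples:

```shell
cd "$(mktemp -d)"
mkdir data
echo "some text" > data/letter.txt

tar -czvf backup.tar.gz data   # -c create, -z gzip-compress, -v list files, -f write to this file
tar -tzf backup.tar.gz         # -t lists the archive contents without unpacking

mkdir restore
tar -xzf backup.tar.gz -C restore   # -x extracts; -C chooses the target directory
cat restore/data/letter.txt         # prints: some text
```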
locate patterns
This command is only available if you have installed the findutils-locate
package. The locate command can find in which directory a specified file is located. If desired, use wild cards to specify filenames. The program is very speedy,
because it uses a database specifically created for the purpose (rather than searching
through the entire file system). This very fact, however, also results in a major
drawback: locate is unable to find any files created after the latest update of its
database. The database can be generated by root with updatedb.
updatedb [options]
This command performs an update of the database used by locate. To include
files in all existing directories, run the program as root. It also makes sense to
place it in the background by appending an ampersand (&), so you can immediately
continue working on the same command line (updatedb &). This command
usually runs as a daily cron job (see cron.daily).
find [options]
With find, search for a file in a given directory. The first argument specifies the
directory in which to start the search. The option -name must be followed by a
search string, which may also include wild cards. Unlike locate, which uses a
database, find scans the actual directory.
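For example, searching a freshly created directory tree (so the output is predictable) with a wildcard pattern:

```shell
cd "$(mktemp -d)"
mkdir -p src/docs
touch src/readme.txt src/docs/howto.txt src/main.c

# Search the src directory tree for all files whose name matches *.txt.
find src -name "*.txt"   # lists src/readme.txt and src/docs/howto.txt
```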
File Systems
mount [options] [device] mountpoint
This command can be used to mount any data media, such as hard disks, CD-ROM
drives, and other drives, to a directory of the Linux file system.
-r
Mount read-only
-t filesystem
Specify the file system, commonly ext2 for Linux hard disks, msdos for
MS-DOS media, vfat for the Windows file system, and iso9660 for CDs
For hard disks not defined in the file /etc/fstab, the device type must also be
specified. In this case, only root can mount it. If the file system should also be
mounted by other users, enter the option user in the appropriate line in the /etc/
fstab file (separated by commas) and save this change. Further information is
available in the mount(1) man page.
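A hypothetical /etc/fstab line for a drive that ordinary users may mount could look like this; the device, mount point, and file system are examples only:

```
/dev/sdb1  /media/usbstick  vfat  noauto,user  0 0
```

With the user option in place, a regular user can run mount /media/usbstick without root privileges.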
umount [options] mountpoint
This command unmounts a mounted drive from the file system. To prevent data
loss, run this command before taking a removable data medium from its drive.
Normally, only root is allowed to run the commands mount and umount. To
enable other users to run these commands, edit the /etc/fstab file to specify
the option user for the respective drive.
System Information
df [options] [directory]
The df (disk free) command, when used without any options, displays information
about the total disk space, the disk space currently in use, and the free space on all
the mounted drives. If a directory is specified, the information is limited to the
drive on which that directory is located.
-h
Shows the number of occupied blocks in gigabytes, megabytes, or kilobytes, in human-readable format
-T
Type of file system (ext2, nfs, etc.)
du [options] [path]
This command, when executed without any parameters, shows the total disk space
occupied by files and subdirectories in the current directory.
-a
Displays the size of each individual file
-h
Output in human-readable form
-s
Displays only the calculated total size
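The du options can be compared directly; this sketch uses a small generated file so the numbers are predictable:

```shell
cd "$(mktemp -d)"
mkdir project
head -c 4096 /dev/zero > project/data.bin   # create a 4 KB sample file

du -a project    # one line per file and directory
du -sh project   # only the human-readable total
df -h .          # free space on the file system holding this directory
```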
free [options]
The command free displays information about RAM and swap space usage,
showing the total and the used amount in both categories. See Section 14.1.6, The
free Command (page 220) for more information.
-b
Output in bytes
-k
Output in kilobytes
-m
Output in megabytes
date [options]
This simple program displays the current system time. If run as root, it can also
be used to change the system time. Details about the program are available in the
date(1) man page.
Processes
top [options]
top provides a quick overview of the currently running processes. Press H to access
a page that briefly explains the main options for customizing the program.
ps [options] [process ID]
If run without any options, this command displays a table of all your own programs or processes (those you started). The options for this command are not preceded by a hyphen.
aux
Displays a detailed list of all processes, independent of the owner
kill [options] process ID
Unfortunately, sometimes a program cannot be terminated in the normal way. In
most cases, you should still be able to stop such a runaway program by executing
the kill command, specifying the respective process ID (see top and ps). kill
sends a TERM signal that instructs the program to shut itself down. If this does not
help, the following parameter can be used:
-9
Sends a KILL signal instead of a TERM signal, bringing the specified process
to an end in almost all cases
killall [options] processname
This command is similar to kill, but uses the process name (instead of the process
ID) as an argument, killing all processes with that name.
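The interplay of ps and kill can be demonstrated with a harmless background process; in this sketch, sleep stands in for the runaway program:

```shell
sleep 60 &                        # a stand-in for a long-running process
pid=$!                            # the shell records its process ID in $!

ps -p "$pid"                      # the process appears in the process table
kill "$pid"                       # send the default TERM signal
wait "$pid" 2>/dev/null || true   # reap it; a terminated job reports a nonzero status
ps -p "$pid" || echo "process $pid is gone"
```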
Network
ping [options] hostname or IP address
The ping command is the standard tool for testing the basic functionality of TCP/IP
networks. It sends a small data packet to the destination host, requesting an immediate reply. If this works, ping displays a message to that effect, which indicates
that the network link is basically functioning.
-c number
Determines the total number of packets to send and ends after they have been dispatched (by default, there is no limitation set)
-f
flood ping: sends as many data packets as possible; a popular means, reserved for root, to test networks
-i value
Specifies the interval between two data packets in seconds (default: one second)
nslookup
The domain name system resolves domain names to IP addresses. With this tool,
send queries to name servers (DNS servers).
telnet [options] hostname or IP address [port]
Telnet is actually an Internet protocol that enables you to work on remote hosts
across a network. telnet is also the name of a Linux program that uses this protocol
to enable operations on remote computers.
WARNING
Do not use telnet over a network on which third parties can eavesdrop.
Particularly on the Internet, use encrypted transfer methods, such as ssh,
to avoid the risk of malicious misuse of a password (see the man page for
ssh).
Miscellaneous
passwd [options] [username]
Users may change their own passwords at any time using this command. The administrator root can use the command to change the password of any user on the
system.
su [options] [username]
The su command makes it possible to log in under a different username from a
running session. Specify a username and the corresponding password. The password
is not required from root, because root is authorized to assume the identity of
any user. When using the command without specifying a username, you are prompted for the root password and change to the superuser (root). Use su - to start a login shell for a different user.
halt [options]
To avoid loss of data, you should always use this program to shut down your system.
reboot [options]
Does the same as halt except the system performs an immediate reboot.
clear
This command cleans up the visible area of the console. It has no options.
1. Exit without saving: To terminate the editor without saving the changes, enter :q! in command mode. The exclamation mark (!) causes vi to ignore any changes.

2. Save and exit: There are several possibilities to save your changes and terminate the editor. In command mode, use Shift + Z Shift + Z. To exit the program saving all changes using the extended mode, enter :wq. In extended mode, w stands for write and q for quit.
19.4.2 vi in Action
vi can be used as a normal editor. In insert mode, enter text and delete text with the Backspace and Del keys. Use the arrow keys to move the cursor.
However, these control keys often cause problems, because there are many terminal
types that use special key codes. This is where the command mode comes into play.
Press Esc to switch from insert mode to command mode. In command mode, move the
cursor with H, J, K, and L. The keys have the following functions:

H
Move the cursor one character to the left
J
Move the cursor one line down
K
Move the cursor one line up
L
Move the cursor one character to the right
Table 19.2 Simple Commands of the vi Editor

Esc          Change to command mode
Shift + A    Change to insert mode (characters are appended at the end of the line)
Shift + R    Change to replace mode (the old text is overwritten)
Shift + O    Change to insert mode (a new line is inserted before the current one)
DD           Delete the current line
DW           Delete up to the end of the current word
CW           Change to insert mode (the rest of the current word is overwritten by the next entries you make)
Ctrl + R     Redo the last undone change
Shift + J    Join the following line with the current one
Basic Networking
20
Linux offers the necessary networking tools and features for integration into all types
of network structures. The customary Linux protocol, TCP/IP, has various services and
special features, which are discussed here. Network access using a network card, modem,
or other device can be configured with YaST. Manual configuration is also possible.
Only the fundamental mechanisms and the relevant network configuration files are
discussed in this chapter.
Linux and other Unix operating systems use the TCP/IP protocol. It is not a single
network protocol, but a family of network protocols that offer various services. The
protocols listed in Table 20.1, Several Protocols in the TCP/IP Protocol Family
(page 300) are provided for the purpose of exchanging data between two machines via
TCP/IP. Networks combined by TCP/IP, comprising a worldwide network are also referred to, in their entirety, as the Internet.
RFC stands for Request for Comments. RFCs are documents that describe various Internet protocols and implementation procedures for the operating system and its applications. The RFC documents describe the setup of Internet protocols. To expand your
knowledge about any of the protocols, refer to the appropriate RFC documents. They
are available online at http://www.ietf.org/rfc.html.
Table 20.1 Several Protocols in the TCP/IP Protocol Family

Protocol   Description

TCP        Transmission control protocol: a connection-oriented, secure protocol. The data is delivered to the receiving application in the order in which it was sent and without loss.
UDP        User datagram protocol: a connectionless, insecure protocol. The order in which the data arrives is not guaranteed and packets may be lost, but the latency is lower than with TCP.
ICMP       Internet control message protocol: a control protocol that issues error reports and can control the behavior of hosts participating in TCP/IP data transfer; its echo mode is used by ping.
IGMP       Internet group management protocol: controls machine behavior when implementing IP multicast.
As shown in Figure 20.1, Simplified Layer Model for TCP/IP (page 301), data exchange takes place in different layers. The actual network layer is the insecure data
transfer via IP (Internet protocol). On top of IP, TCP (transmission control protocol)
guarantees, to a certain extent, security of the data transfer. The IP layer is supported
by the underlying hardware-dependent protocol, such as ethernet.
Figure 20.1 Simplified Layer Model for TCP/IP

[Diagram: two hosts (such as host earth and a peer) exchange data. On each host, the layers from top to bottom are the application layer (applications), the transport layer (TCP, UDP), the network layer (IP), and the physical layer (cable, fiberglass); data transfer takes place between the physical layers.]
The diagram provides one or two examples for each layer. The layers are ordered according to abstraction levels. The lowest layer is very close to the hardware. The uppermost layer, however, is almost a complete abstraction from the hardware. Every layer
has its own special function. The special functions of each layer are mostly implicit in
their description. The data link and physical layers represent the physical network used,
such as ethernet.
Almost all hardware protocols work on a packet-oriented basis. The data to transmit is
packaged in packets, because it cannot be sent all at once. The maximum size of a
TCP/IP packet is approximately 64 KB. Packets are normally quite a bit smaller, because
the network hardware can be a limiting factor. The maximum size of a data packet on
an ethernet is about fifteen hundred bytes. The size of a TCP/IP packet is limited to this
amount when the data is sent over an ethernet. If more data is transferred, more data
packets need to be sent by the operating system.
For the layers to serve their designated functions, additional information regarding each
layer must be saved in the data packet. This takes place in the header of the packet.
Every layer attaches a small block of data, called the protocol header, to the front of
each emerging packet. A sample TCP/IP data packet traveling over an ethernet cable
is illustrated in Figure 20.2, TCP/IP Ethernet Packet (page 302). The checksum is located at the end of the packet, not at the beginning. This simplifies things for the network hardware.
Figure 20.2 TCP/IP Ethernet Packet
When an application sends data over the network, the data passes through each layer,
all implemented in the Linux kernel except the physical layer. Each layer is responsible
for preparing the data so it can be passed to the next layer. The lowest layer is ultimately
responsible for sending the data. The entire procedure is reversed when data is received.
Like the layers of an onion, in each layer the protocol headers are removed from the
transported data. Finally, the transport layer is responsible for making the data available
for use by the applications at the destination. In this manner, one layer only communicates with the layer directly above or below it. For applications, it is irrelevant whether
data is transmitted via a 100 Mbit/s FDDI network or via a 56-Kbit/s modem line.
Likewise, it is irrelevant for the data line which kind of data is transmitted, as long as
packets are in the correct format.
20.1.1 IP Addresses
Every computer on the Internet has a unique 32-bit address. These 32 bits (or 4 bytes)
are normally written as illustrated in the second row in Example 20.1, Writing IP
Addresses (page 303).
Example 20.1 Writing IP Addresses

IP Address (binary):  11000000 10101000 00000000 00010100
IP Address (decimal):      192.     168.       0.      20
In decimal form, the four bytes are written in the decimal number system, separated by
periods. The IP address is assigned to a host or a network interface. It cannot be used
anywhere else in the world. There are exceptions to this rule, but these are not relevant
in the following passages.
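The conversion between the two notations in Example 20.1 can be sketched with plain shell arithmetic; the dec2bin helper below is a hypothetical name for this illustration, not a system tool:

```shell
# Convert one byte of an IP address to its binary form using only
# shell arithmetic (no external tools needed).
dec2bin() {
    local n=$1 bits=""
    for i in 7 6 5 4 3 2 1 0; do
        bits+=$(( (n >> i) & 1 ))   # append the bit at position i
    done
    echo "$bits"
}

for octet in 192 168 0 20; do
    printf '%s ' "$(dec2bin "$octet")"
done
echo    # prints: 11000000 10101000 00000000 00010100
```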
The points in IP addresses indicate the hierarchical system. Until the 1990s, IP addresses
were strictly categorized in classes. However, this system has proven too inflexible and
was discontinued. Now, classless routing (CIDR, classless interdomain routing) is used.
To give another example: all machines connected with the same ethernet cable are
usually located in the same subnetwork and are directly accessible. Even when the
subnet is physically divided by switches or bridges, these hosts can still be reached directly.
IP addresses outside the local subnet can only be reached if a gateway is configured
for the target network. In the most common case, there is only one gateway that handles
all traffic that is external. However, it is also possible to configure several gateways
for different subnets.
If a gateway has been configured, all external IP packets are sent to the appropriate
gateway. This gateway then attempts to forward the packets in the same manner, from host to host, until it reaches the destination host or the packet's TTL (time to live) expires.
Table 20.2 Specific Addresses

Address Type        Description

Broadcast Address   Basically, the network address with all host bits set to 1; packets sent to this address reach all hosts in the local subnetwork.
Local Host          The address 127.0.0.1 is assigned to the loopback device on each host and always refers to the local machine.
Because IP addresses must be unique all over the world, you cannot just select random addresses. There are three address domains to use if you want to set up a private IP-based network. These cannot get any connection from the rest of the Internet, because they cannot be transmitted over the Internet. These address domains are specified in RFC 1597 and listed in Table 20.3, Private IP Address Domains (page 305).
Table 20.3 Private IP Address Domains

Network/Netmask             Domain

10.0.0.0/255.0.0.0          10.x.x.x
172.16.0.0/255.240.0.0      172.16.x.x to 172.31.x.x
192.168.0.0/255.255.0.0     192.168.x.x
from which only 254 are usable, because two IP addresses are needed for the structure
of the subnetwork itself: the broadcast and the base network address.
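How these two reserved addresses derive from an address and its netmask can be sketched with shell arithmetic. The helper names and the address 192.168.0.20 with netmask 255.255.255.0 are illustrative assumptions:

```shell
# Pack four octets into one 32-bit number and unpack it again.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }
int2ip() { echo "$(( ($1>>24)&255 )).$(( ($1>>16)&255 )).$(( ($1>>8)&255 )).$(( $1&255 ))"; }

ip=$(ip2int 192.168.0.20)
mask=$(ip2int 255.255.255.0)

network=$(( ip & mask ))                           # base network address: host bits cleared
broadcast=$(( network | (~mask & 0xFFFFFFFF) ))    # broadcast address: host bits set

int2ip "$network"     # prints 192.168.0.0
int2ip "$broadcast"   # prints 192.168.0.255
```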
Under the current IPv4 protocol, DHCP or NAT (network address translation) are the
typical mechanisms used to circumvent the potential address shortage. Combined with
the convention to keep private and public address spaces separate, these methods can
certainly mitigate the shortage. The problem with them lies in their configuration, which
is a chore to set up and a burden to maintain. To set up a host in an IPv4 network, you
need a number of address items, such as the host's own IP address, the subnetmask, the
gateway address, and maybe a name server address. All these items need to be known
and cannot be derived from somewhere else.
With IPv6, both the address shortage and the complicated configuration should be a
thing of the past. The following sections tell more about the improvements and benefits
brought by IPv6 and about the transition from the old protocol to the new one.
20.2.1 Advantages
The most important and most visible improvement brought by the new protocol is the
enormous expansion of the available address space. An IPv6 address is made up of 128
bit values instead of the traditional 32 bits. This provides for as many as several
quadrillion IP addresses.
However, IPv6 addresses are not only different from their predecessors with regard to
their length. They also have a different internal structure that may contain more specific
information about the systems and the networks to which they belong. More details
about this are found in Section 20.2.2, Address Types and Structure (page 308).
The following is a list of some other advantages of the new protocol:
Autoconfiguration
IPv6 makes the network plug-and-play capable, which means that a newly set up
system integrates into the (local) network without any manual configuration. The
new host uses its automatic configuration mechanism to derive its own address
from the information made available by the neighboring routers, relying on the
neighbor discovery (ND) protocol. This method does not require any intervention
on the administrator's part and there is no need to maintain a central server
for address allocation, an additional advantage over IPv4, where automatic
address allocation requires a DHCP server.
Mobility
IPv6 makes it possible to assign several addresses to one network interface at the
same time. This allows users to access several networks easily, something that
could be compared with the international roaming services offered by mobile phone
companies: when you take your mobile phone abroad, the phone automatically
logs in to a foreign service as soon as it enters the corresponding area, so you can
be reached under the same number everywhere and are able to place an outgoing
call just like in your home area.
Secure Communication
With IPv4, network security is an add-on function. IPv6 includes IPsec as one of
its core features, allowing systems to communicate over a secure tunnel to avoid
eavesdropping by outsiders on the Internet.
Backward Compatibility
Realistically, it would be impossible to switch the entire Internet from IPv4 to IPv6
at one time. Therefore, it is crucial that both protocols are able to coexist not only
on the Internet, but also on one system. This is ensured by compatible addresses
(IPv4 addresses can easily be translated into IPv6 addresses) and through the use
of a number of tunnels. See Section 20.2.3, Coexistence of IPv4 and IPv6
(page 312). Also, systems can rely on a dual stack IP technique to support both
protocols at the same time, meaning that they have two network stacks that are
completely separate, such that there is no interference between the two protocol
versions.
Custom Tailored Services through Multicasting
With IPv4, some services, such as SMB, need to broadcast their packets to all hosts
in the local network. IPv6 allows a much more fine-grained approach by enabling
servers to address hosts through multicasting, that is, by addressing a number of
hosts as parts of a group (which is different from addressing all hosts through
broadcasting or each host individually through unicasting). Which hosts are addressed
as a group may depend on the concrete application. There are some predefined groups
to address all name servers (the all name servers multicast group), for example, or
all routers (the all routers multicast group).
shorthand notation is shown in Example 20.3, Sample IPv6 Address (page 309), where
all three lines represent the same address.
Example 20.3 Sample IPv6 Address
fe80 : 0000 : 0000 : 0000 : 0000 : 10 : 1000 : 1a4
fe80 :    0 :    0 :    0 :    0 : 10 : 1000 : 1a4
fe80 :                           : 10 : 1000 : 1a4
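To make the shorthand rule concrete, here is a small sketch (the function name expand6 is made up for this illustration) that expands a collapsed :: back into the full eight groups, assuming at most one :: and no embedded IPv4 notation:

```shell
#!/bin/sh
# Sketch: expand the "::" shorthand of an IPv6 address back into eight
# colon-separated groups (leading zeros within a group are not restored).
expand6() {
  addr=$1
  case $addr in
    *::*)
      left=${addr%%::*}
      right=${addr#*::}
      nl=0; nr=0
      [ -n "$left" ]  && nl=$(echo "$left"  | awk -F: '{print NF}')
      [ -n "$right" ] && nr=$(echo "$right" | awk -F: '{print NF}')
      missing=$(( 8 - nl - nr ))      # groups elided by the "::"
      mid=""
      i=0
      while [ $i -lt $missing ]; do mid="$mid:0"; i=$((i+1)); done
      addr="$left$mid:$right"
      addr=${addr#:}                  # trim stray colon if "::" was leading
      addr=${addr%:}                  # ... or trailing
      ;;
  esac
  echo "$addr"
}
expand6 fe80::10:1000:1a4   # prints fe80:0:0:0:0:10:1000:1a4
```

The result corresponds to the middle line of Example 20.3, with the zero padding inside each group omitted.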
Each part of an IPv6 address has a defined function. The first bytes form the prefix and
specify the type of address. The center part is the network portion of the address, but
it may be unused. The end of the address forms the host part. With IPv6, the netmask
is defined by indicating the length of the prefix after a slash at the end of the address.
An address, as shown in Example 20.4, IPv6 Address Specifying the Prefix Length
(page 309), contains the information that the first 64 bits form the network part of the
address and the last 64 form its host part. In other words, the 64 means that the netmask
is filled with 64 1-bit values from the left. Just like with IPv4, the IP address is
combined by a bitwise AND with the values from the netmask to determine whether the
host is located in the same subnetwork or in another one.
Example 20.4 IPv6 Address Specifying the Prefix Length
fe80::10:1000:1a4/64
IPv6 knows about several predefined types of prefixes. Some of these are shown in
Table 20.4, Various IPv6 Prefixes (page 309).
Table 20.4    Various IPv6 Prefixes

Prefix (hex)                 Definition
00                           IPv4 compatibility addresses and IPv4-mapped
                             IPv6 addresses
2 or 3 as the first digit    Aggregatable global unicast addresses
fe80::/10                    Link-local addresses
fec0::/10                    Site-local addresses
ff                           Multicast addresses
::1 (loopback)
The address of the loopback device.
IPv4 Compatible Addresses
The IPv6 address is formed by the IPv4 address and a prefix consisting of 96 zero
bits. This type of compatibility address is used for tunneling (see Section 20.2.3,
Coexistence of IPv4 and IPv6 (page 312)) to allow IPv4 and IPv6 hosts to communicate with others operating in a pure IPv4 environment.
IPv4 Addresses Mapped to IPv6
This type of address specifies a pure IPv4 address in IPv6 notation.
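As a sketch (the address 192.168.0.20 is an arbitrary example), the hexadecimal form of such a mapped address can be derived from the four decimal octets:

```shell
#!/bin/sh
# Sketch: write an IPv4 address in IPv4-mapped IPv6 hexadecimal notation.
v4=192.168.0.20
IFS=. read -r a b c d <<EOF
$v4
EOF
# The last 32 bits of the IPv6 address carry the IPv4 address; the
# preceding 16 bits are all ones (ffff), the rest is zero.
printf '::ffff:%02x%02x:%02x%02x\n' "$a" "$b" "$c" "$d"
# prints ::ffff:c0a8:0014
```

The same address is often written with the IPv4 part left in dotted notation, as ::ffff:192.168.0.20; both forms denote the same 128-bit value.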
Local Addresses
There are two address types for local use:
link-local
This type of address can only be used in the local subnetwork. Packets with a
source or target address of this type should not be routed to the Internet or
other subnetworks. These addresses contain a special prefix (fe80::/10)
and the interface ID of the network card, with the middle part consisting of
zero bytes. Addresses of this type are used during automatic configuration to
communicate with other hosts belonging to the same subnetwork.
site-local
Packets with this type of address may be routed to other subnetworks, but not
to the wider Internet; they must remain inside the organization's own network.
Such addresses are used for intranets and are an equivalent of the private address
space defined by IPv4. They contain a special prefix (fec0::/10), the interface ID,
and a 16-bit field specifying the subnetwork ID. Again, the rest is
filled with zero bytes.
As a completely new feature introduced with IPv6, each network interface normally
gets several IP addresses, with the advantage that several networks can be accessed
through the same interface. One of these networks can be configured completely automatically using the MAC and a known prefix with the result that all hosts on the local
network can be reached as soon as IPv6 is enabled (using the link-local address). With
the MAC forming part of it, any IP address used in the world is unique. The only variable
parts of the address are those specifying the site topology and the public topology, depending on the actual network in which the host is currently operating.
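The MAC-based part of this scheme can be sketched as follows (the MAC address is a made-up example; the derivation shown is the standard EUI-64 rule used for link-local interface IDs):

```shell
#!/bin/sh
# Sketch: derive a link-local address from a MAC address (EUI-64 rule):
# flip the universal/local bit of the first octet and insert ff:fe in
# the middle of the MAC. The MAC address is a made-up example.
mac=00:11:22:33:44:55
IFS=: read -r m1 m2 m3 m4 m5 m6 <<EOF
$mac
EOF
m1=$(printf '%02x' $(( 0x$m1 ^ 0x02 )))   # flip the universal/local bit
printf 'fe80::%s%s:%sff:fe%s:%s%s\n' "$m1" "$m2" "$m3" "$m4" "$m5" "$m6"
# prints fe80::0211:22ff:fe33:4455
```

Because the interface ID is derived from the globally unique MAC, the resulting link-local address is unique as soon as IPv6 is enabled, without any server involvement.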
For a host to go back and forth between different networks, it needs at least two addresses. One of them, the home address, not only contains the interface ID but also an identifier of the home network to which it normally belongs (and the corresponding prefix).
The home address is a static address and, as such, it does not normally change. Still,
all packets destined to the mobile host can be delivered to it, regardless of whether it
operates in the home network or somewhere outside. This is made possible by the
completely new features introduced with IPv6, such as stateless autoconfiguration and
neighbor discovery. In addition to its home address, a mobile host gets one or more
additional addresses that belong to the foreign networks where it is roaming. These are
called care-of addresses. The home network has a facility that forwards any packets
destined to the host when it is roaming outside. In an IPv6 environment, this task is
performed by the home agent, which takes all packets destined to the home address and
relays them through a tunnel. On the other hand, those packets destined to the care-of
address are directly transferred to the mobile host without any special detours.
6over4
IPv6 packets are automatically encapsulated as IPv4 packets and sent over an IPv4
network capable of multicasting. IPv6 is tricked into seeing the whole network
(Internet) as a huge local area network (LAN). This makes it possible to determine
the receiving end of the IPv4 tunnel automatically. However, this method does not
scale very well and is also hampered by the fact that IP multicasting is far from
widespread on the Internet. Therefore, it only provides a solution for smaller corporate or institutional networks where multicasting can be enabled. The specifications for this method are laid down in RFC 2529.
6to4
With this method, IPv4 addresses are automatically generated from IPv6 addresses,
enabling isolated IPv6 hosts to communicate over an IPv4 network. However, a
number of problems have been reported regarding the communication between
those isolated IPv6 hosts and the Internet. The method is described in RFC 3056.
IPv6 Tunnel Broker
This method relies on special servers that provide dedicated tunnels for IPv6 hosts.
It is described in RFC 3053.
TLD assignment has become quite confusing for historical reasons. Traditionally,
three-letter domain names are used in the USA. In the rest of the world, the
two-letter ISO national codes are the standard. In addition to that, longer TLDs
were introduced in
2000 that represent certain spheres of activity (for example, .info, .name, .museum).
In the early days of the Internet (before 1990), the file /etc/hosts was used to store
the names of all the machines represented over the Internet. This quickly proved to be
impractical in the face of the rapidly growing number of computers connected to the
Internet. For this reason, a decentralized database was developed to store the
hostnames in a widely distributed manner. This database, made up of many cooperating
name servers, does not have the data pertaining to all hosts on the Internet readily
available, but can dispatch requests to other name servers.
The top of the hierarchy is occupied by root name servers. These root name servers
manage the top level domains and are run by the Network Information Center (NIC).
Each root name server knows about the name servers responsible for a given top level
domain. Information about top level domain NICs is available at http://www
.internic.net.
DNS can do more than just resolve hostnames. The name server also knows which host
is receiving e-mails for an entire domain: the mail exchanger (MX).
For your machine to resolve an IP address, it must know about at least one name server
and its IP address. Easily specify such a name server with the help of YaST. If you have
a modem dial-up connection, you may not need to configure a name server manually
at all. The dial-up protocol provides the name server address as the connection is made.
The configuration of name server access with openSUSE is described in Chapter 22,
The Domain Name System (page 353).
The protocol whois is closely related to DNS. With this program, quickly find out
who is responsible for any given domain.
For a detailed overview of the aspects of manual network configuration, see Section 20.5,
Configuring a Network Connection Manually (page 332).
During installation, YaST can be used to automatically configure all interfaces that
have been detected. Additional hardware can be configured any time after installation
in the installed system. The following sections describe the network configuration for
all types of network connections supported by openSUSE.
The Hostname/DNS tab allows you to set the hostname of the computer and the name
servers to be used. For more information about these options, see Section Configuring Hostname
and DNS (page 319). In the Routing tab, you can set the default gateway and routing
details. See Section Configuring Routing (page 320) for more information.
Configuring IP Addresses
If possible, wired network cards that are available during the installation are
automatically configured for automatic address setup via DHCP.
DHCP should also be used if you are using a DSL line with no static IP assigned
by the ISP (Internet Service Provider). If you decide to use DHCP, configure the
details in DHCP Client Options on the Global Options tab of the Network Settings
dialog of the YaST network card configuration module. Specify whether the DHCP
client should ask the server to always broadcast its responses in Request Broadcast
Response. This option may be needed if your machine is a mobile client moving
between networks. If you have a virtual host setup where different hosts communicate
through the same interface, a DHCP Client Identifier is necessary to distinguish them.
DHCP is a good choice for client configuration but it is not ideal for server configuration.
To set a static IP address, proceed as follows:
1 Select a card from the list of detected cards in the Overview tab of the YaST
network card configuration module and click Configure.
2 In the Address tab, choose Statically assigned IP Address.
3 Enter IP Address and Subnet Mask.
4 Click Next.
5 To activate the configuration, click Finish.
If you use the static address, the name servers and default gateway are not configured
automatically. To configure name servers, proceed as described in Section Configuring
Hostname and DNS (page 319). To configure a gateway, proceed as described in Section
Configuring Routing (page 320).
Configuring Aliases
One network device can have multiple IP addresses, called aliases. To set an alias for
your network card, proceed as follows:
1 Select a card from the list of detected cards in the Overview tab of the YaST
network card configuration module and click Configure.
2 In the Additional Addresses part of the Address tab, click Add.
3 Enter Alias Name, IP Address, and Netmask. Do not include the interface name
in the alias name.
4 Click OK.
5 Click Next.
6 To activate the configuration, click Finish.
which may assign different hostnames, because changing the hostname at runtime
may confuse the graphical desktop.
If you are using DHCP to get an IP address, your hostname is written to
/etc/hosts by default and is resolvable as the IP address 127.0.0.2. If you
want to disable this, uncheck Write Hostname to /etc/hosts, but note that your
hostname will not be resolvable without an active network.
3 Enter the name servers and domain search list.
4 To activate the configuration, click Finish.
Configuring Routing
To make your machine communicate with other machines and other networks, routing
information must be given to make network traffic take the correct path. If DHCP is
used, this information is automatically provided. If a static setup is used, this data must
be added manually.
1 Go to the Routing tab of the YaST network card configuration module.
2 Enter the IP address of the Default Gateway. The default gateway matches every possible
destination, but poorly. If any other entry exists that matches the required address,
it is used instead of the default route.
3 If you need to add more entries to the Routing Table, check Expert Configuration.
Then add an entry with Add. Enter Destination, Netmask, and optionally
select the Device to be used.
4 If the system is a router, enable the IP Forwarding option.
5 To activate the configuration, click Finish.
1 Select a card from the list of detected cards in the YaST network card configuration module and click Configure.
2 Go to the Hardware tab. The Matching rule under Udev rules is not editable. It
is the hardware address (MAC) or bus ID udev uses to identify the network card.
You can, however, change the device name of this card by editing it in Device
Name.
3 In Driver name you can set the driver to be used for the network card.
4 Click Next.
5 To activate configuration, click Finish.
20.4.2 Modem
In the YaST Control Center, access the modem configuration under Network Devices
> Modem. If your modem was not automatically detected, open the dialog for manual
configuration by clicking Add. Enter the interface to which the modem is connected
under Modem Device.
TIP: CDMA and GPRS Modems
Configure supported CDMA and GPRS modems with the YaST modem module
just as you would configure regular modems.
Figure 20.4 Modem Configuration
If you are behind a private branch exchange (PBX), you may need to enter a dial prefix.
This is often a zero. Consult the instructions that came with the PBX to find out. Also
select whether to use tone or pulse dialing, whether the speaker should be on, and
whether the modem should wait until it detects a dial tone. The last option should not
be enabled if the modem is connected to an exchange.
Under Details, set the baud rate and the modem initialization strings. Only change these
settings if your modem was not detected automatically or if it requires special settings
for data transmission to work. This is mainly the case with ISDN terminal adapters.
Leave this dialog by clicking OK. To delegate control over the modem to the normal
user without root permissions, activate Enable Device Control for Non-root User via
KInternet. In this way, a user without administrator permissions can activate or deactivate
an interface. Under Dial Prefix Regular Expression, specify a regular expression. The
Dial Prefix in KInternet, which can be modified by the normal user, must match this
regular expression. If this field is left empty, the user cannot set a different Dial Prefix
without administrator permissions.
In the next dialog, select the ISP. To choose from a predefined list of ISPs operating
in your country, select Country. Alternatively, click New to open a dialog in which to
provide the data for your ISP. This includes a name for the dial-up connection and ISP
as well as the login and password provided by your ISP. Enable Always Ask for Password
to be prompted for the password each time you connect.
In the last dialog, specify additional connection options:
Dial on Demand
If you enable dial on demand, set at least one name server. Use this feature only if
your Internet connection is inexpensive, because there are programs that periodically
request data from the Internet.
Modify DNS when Connected
This option is enabled by default, with the effect that the name server address is
updated each time you connect to the Internet.
Automatically Retrieve DNS
If the provider does not transmit its domain name server after connecting, disable
this option and enter the DNS data manually.
Automatically Reconnect
If this option is enabled, the connection is automatically reestablished after a failure.
Ignore prompts
This option disables the detection of any prompts from the dial-up server. If the
connection setup is slow or does not work at all, try this option.
External Firewall Interface
Selecting this option activates SuSEfirewall2 and sets the interface as external.
This way, you are protected from outside attacks for the duration of your Internet
connection.
20.4.3 ISDN
Use this module to configure one or several ISDN cards for your system. If YaST did
not detect your ISDN card, click Add in the ISDN Devices tab and manually select
your card. Multiple interfaces are possible, and several ISPs can be configured for
one interface. In the subsequent dialogs, set the ISDN options necessary for the
proper functioning of the card.
Figure 20.5 ISDN Configuration
In the next dialog, shown in Figure 20.5, ISDN Configuration (page 326), select the
protocol to use. The default is Euro-ISDN (EDSS1), but for older or larger exchanges,
select 1TR6. If you are in the US, select NI1. Select your country in the relevant field.
The corresponding country code then appears in the field next to it. Finally, provide
your Area Code and the Dial Prefix if necessary.
Activate device defines how the ISDN interface should be started: At Boot Time causes
the ISDN driver to be initialized each time the system boots. Manually requires you to
load the ISDN driver as root with the command rcisdn start. On Hotplug, used
for PCMCIA or USB devices, loads the driver after the device is plugged in. When
finished with these settings, select OK.
In the next dialog, specify the interface type for your ISDN card and add ISPs to an
existing interface. Interfaces may be either the SyncPPP or the RawIP type, but most
ISPs operate in the SyncPPP mode, which is described below.
Figure 20.6 ISDN Interface Configuration
The number to enter for My Phone Number depends on your particular setup:
ISDN Card Directly Connected to Phone Outlet
A standard ISDN line provides three phone numbers (called multiple subscriber
numbers, or MSNs). If the subscriber asked for more, there may be up to 10. One
of these MSNs must be entered here, but without your area code. If you enter the
wrong number, your phone operator automatically falls back to the first MSN assigned to your ISDN line.
ISDN Card Connected to a Private Branch Exchange
Again, the configuration may vary depending on the equipment installed:
1. Smaller private branch exchanges (PBX) built for home purposes mostly use
the Euro-ISDN (EDSS1) protocol for internal calls. These exchanges have an
internal S0 bus and use internal numbers for the equipment connected to them.
Use one of the internal numbers as your MSN. You should be able to use at least
one of the exchange's MSNs that have been enabled for direct outward dialing.
If this does not work, try a single zero. For further information, consult the
documentation delivered with your phone exchange.
2. Larger phone exchanges designed for businesses normally use the 1TR6 protocol
for internal calls. Their MSN is called EAZ and usually corresponds to the
direct-dial number. For the configuration under Linux, it should be sufficient to enter
the last digit of the EAZ. As a last resort, try each of the digits from 1 to 9.
For the connection to be terminated just before the next charge unit is due, enable
ChargeHUP. However, remember that this may not work with every ISP. You can also
enable channel bundling (multilink PPP) by selecting the corresponding option. Finally,
you can enable SuSEfirewall2 for your link by selecting External Firewall Interface
and Restart Firewall. To enable the normal user without administrator permissions to
activate or deactivate the interface, select Enable Device Control for Non-root User
via KInternet.
Details opens a dialog in which to implement more complex connection schemes, which
are not relevant for normal home users. Leave the Details dialog by selecting OK.
In the next dialog, make IP address settings. If you have not been given a static IP by
your provider, select Dynamic IP Address. Otherwise, use the fields provided to enter
your host's local IP address and the remote IP address according to the specifications
of your ISP. If the interface should be the default route to the Internet, select Default
Route. Each host can only have one interface configured as the default route. Leave
this dialog by selecting Next.
The following dialog allows you to set your country and select an ISP. The ISPs included
in the list are call-by-call providers only. If your ISP is not in the list, select New. This
opens the Provider Parameters dialog in which to enter all the details for your ISP.
When entering the phone number, do not include any blanks or commas among the
digits. Finally, enter your login and the password as provided by the ISP. When finished,
select Next.
To use Dial on Demand on a stand-alone workstation, also specify the name server
(DNS server). Most ISPs support dynamic DNS, which means the IP address of a name
server is sent by the ISP each time you connect. For a single workstation, however, you
still need to provide a placeholder address like 192.168.22.99. If your ISP does
not support dynamic DNS, specify the name server IP addresses of the ISP. If desired,
specify a time-out for the connection: the period of network inactivity (in seconds)
after which the connection should be automatically terminated. Confirm your settings
with Next. YaST displays a summary of the configured interfaces. To activate these
settings, select Finish.
20.4.5 DSL
To configure your DSL device, select the DSL module from the YaST Network Devices
section. This YaST module consists of several dialogs in which to set the parameters
of DSL links based on one of the following protocols:
PPP over Ethernet (PPPoE)
To use Dial on Demand on a stand-alone workstation, also specify the name server
(DNS server). Most ISPs support dynamic DNS: the IP address of a name server is
sent by the ISP each time you connect. For a single workstation, however, provide a
placeholder address like 192.168.22.99. If your ISP does not support dynamic
DNS, enter the name server IP address provided by your ISP.
Idle Time-Out (seconds) defines a period of network inactivity after which to terminate
the connection automatically. A reasonable time-out value is between 60 and 300 seconds. If Dial on Demand is disabled, it may be useful to set the time-out to zero to
prevent automatic hang-up.
The configuration of T-DSL is very similar to the DSL setup. Just select T-Online as
your provider and YaST opens the T-DSL configuration dialog. In this dialog, provide
some additional information required for T-DSL: the line ID, the T-Online number,
the user code, and your password. All of these should be included in the information
you received after subscribing to T-DSL.
Command
Function
rcnetwork start to start network interfaces, and
rcnetwork restart to restart them. If you want
to stop, start, or restart just one interface, use the command followed by the
interface name, for example rcnetwork restart eth0. If no interface is
specified, the firewall is stopped, started, or restarted
along with the network interfaces. The rcnetwork
status command displays the state of the interfaces,
their IP addresses, and whether a DHCP client is running. With rcnetwork
stop-all-dhcp-clients and rcnetwork
restart-all-dhcp-clients you can stop or
restart the DHCP clients running on network interfaces.
More information about udev and persistent device names is available in Chapter 15,
Dynamic Kernel Device Management with udev (page 227).
/etc/sysconfig/network/ifcfg-*
These files contain the configurations for network interfaces. They include information
such as the start mode and the IP address. Possible parameters are described in the
manual page of ifup. Additionally, all variables from the files dhcp, wireless,
and config can be used in the ifcfg-* files if a general setting should be used for
only one interface.
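As an illustration, a minimal static setup in such a file might look like this (the interface name eth0 and all address values are example placeholders, not taken from the original text):

```shell
# /etc/sysconfig/network/ifcfg-eth0 -- illustrative static configuration
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.0.10'
NETMASK='255.255.255.0'
```

With such a file in place, the interface can be brought up manually with ifup eth0 or together with all others via rcnetwork start.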
/etc/sysconfig/network/config, dhcp,
wireless
The file config contains general settings for the behavior of ifup, ifdown, and
ifstatus. dhcp contains settings for DHCP and wireless for wireless LAN
cards. The variables in all three configuration files are commented and can also be used
in ifcfg-* files, where they are treated with higher priority.
/etc/sysconfig/network/routes, ifroute-*
The static routing of TCP/IP packets is determined here. All the static routes required
by the various system tasks can be entered in the /etc/sysconfig/network/
routes file: routes to a host, routes to a host via a gateway, and routes to a network.
For each interface that needs individual routing, define an additional configuration file:
/etc/sysconfig/network/ifroute-*. Replace * with the name of the interface. The entries in the routing configuration files look like this:
# Destination     Dummy/Gateway     Netmask            Device
#
127.0.0.0         0.0.0.0           255.255.255.0      lo
204.127.235.0     0.0.0.0           255.255.255.0      eth0
default           204.127.235.41    0.0.0.0            eth0
207.68.156.51     207.68.145.45     255.255.255.255    eth1
192.168.0.0       207.68.156.51     255.255.0.0        eth1
The route's destination is in the first column. This column may contain the IP address
of a network or host or, in the case of reachable name servers, the fully qualified network
or hostname.
The second column contains the default gateway or a gateway through which a host or
network can be accessed. The third column contains the netmask for networks or hosts
behind a gateway. For example, the mask is 255.255.255.255 for a host behind a
gateway.
The fourth column is only relevant for networks connected to the local host such as
loopback, Ethernet, ISDN, PPP, and dummy device. The device name must be entered
here.
An (optional) fifth column can be used to specify the type of a route. Columns that are
not needed should contain a minus sign - to ensure that the parser correctly interprets
the command. For details, refer to the routes(5) man page.
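For example, a per-interface file (here hypothetically named ifroute-eth1, with made-up addresses) that routes one additional network over a gateway could look like this:

```
# /etc/sysconfig/network/ifroute-eth1 -- illustrative example
# Destination   Dummy/Gateway   Netmask         Device
10.10.0.0       192.168.1.254   255.255.0.0     eth1
```

Such a file is evaluated by ifup when the named interface is brought up, in addition to the global routes file.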
/etc/resolv.conf
The domain to which the host belongs is specified in this file (keyword search).
Also listed is the address of the name server to access (keyword nameserver).
Multiple domain names can be specified. When resolving a name that is not fully
qualified, an attempt is made to generate one by attaching the individual search entries.
Use multiple name servers by entering several lines, each beginning with nameserver.
Precede comments with # signs. YaST enters the specified name server in this file.
Example 20.5, /etc/resolv.conf (page 335) shows what /etc/resolv.conf
could look like.
Example 20.5 /etc/resolv.conf
# Our domain
search example.com
#
# We use sun (192.168.0.20) as nameserver
nameserver 192.168.0.20
Some services, like pppd (wvdial), ipppd (isdn), dhcp (dhcpcd and
dhclient), and pcmcia modify the file /etc/resolv.conf by means of the
script modify_resolvconf. If the file /etc/resolv.conf has been temporarily modified by this script, it contains a predefined comment giving information about
the service that modified it, the location where the original file has been backed up, and
how to turn off the automatic modification mechanism. If /etc/resolv.conf is
modified several times, the file includes modifications in a nested form. These can be
reverted in a clean way even if this reversal takes place in an order different from the
order in which modifications were introduced. Services that may need this flexibility
include isdn and pcmcia.
If a service was not terminated in a normal, clean way, modify_resolvconf can
be used to restore the original file. Also, on system boot, a check is performed to see
whether there is an uncleaned, modified resolv.conf, for example, after a system
crash, in which case the original (unmodified) resolv.conf is restored.
YaST uses the command modify_resolvconf check to find out whether resolv
.conf has been modified and subsequently warns the user that changes will be lost
after restoring the file. Apart from this, YaST does not rely on modify_resolvconf,
which means that the impact of changing resolv.conf through YaST is the same
as that of any manual change. In both cases, changes have a permanent effect. Modifications requested by the mentioned services are only temporary.
/etc/hosts
In this file, shown in Example 20.6, /etc/hosts (page 336), IP addresses are assigned to hostnames. If no name server is implemented, all hosts to which an IP connection will be set up must be listed here. For each host, enter a line consisting of the IP
address, the fully qualified hostname, and the hostname into the file. The IP address
must be at the beginning of the line and the entries separated by blanks and tabs.
Comments are always preceded by the # sign.
Example 20.6 /etc/hosts
127.0.0.1 localhost
192.168.0.20 sun.example.com sun
192.168.0.1 earth.example.com earth
/etc/networks
Here, network names are converted to network addresses. The format is similar to that
of the hosts file, except the network names precede the addresses. See Example 20.7,
/etc/networks (page 336).
Example 20.7 /etc/networks
loopback     127.0.0.0
localnet     192.168.0.0
/etc/host.conf
Name resolution, the translation of host and network names via the resolver library, is
controlled by this file. This file is only used for programs linked to libc4 or libc5. For
current glibc programs, refer to the settings in /etc/nsswitch.conf. A parameter
must always stand alone in its own line. Comments are preceded by a # sign. Table 20.6,
Parameters for /etc/host.conf (page 336) shows the parameters available. A sample
/etc/host.conf is shown in Example 20.8, /etc/host.conf (page 337).
Table 20.6 Parameters for /etc/host.conf

order hosts, bind
    Specifies in which order the services are accessed for the name
    resolution. Available arguments are (separated by blank spaces
    or commas): hosts (searches the /etc/hosts file), bind (accesses
    a name server), and nis (uses NIS).

nospoof on
spoofalert on/off
    These parameters influence the name server spoofing but do not
    exert any influence on the network configuration.

trim domainname
    The specified domain name is separated from the hostname after
    hostname resolution (as long as the hostname includes the domain
    name).
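For orientation, a minimal /etc/host.conf could look like the following sketch. The chosen order and options are illustrative, not a recommendation for every setup:

```
# /etc/host.conf -- illustrative sketch
# search /etc/hosts first, then query the name server
order hosts, bind
# log attempted hostname spoofing via syslog
spoofalert on
```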
/etc/nsswitch.conf
The introduction of the GNU C Library 2.0 was accompanied by the introduction of
the Name Service Switch (NSS). Refer to the nsswitch.conf(5) man page and
The GNU C Library Reference Manual for details.
The order for queries is defined in the file /etc/nsswitch.conf. A sample
nsswitch.conf is shown in Example 20.9, /etc/nsswitch.conf (page 338).
Comments are introduced by # signs. In this example, the entry under the hosts
database means that a request is first sent to /etc/hosts (files) and then to DNS.
Example 20.9 /etc/nsswitch.conf

passwd:     compat
group:      compat

hosts:      files dns
networks:   files dns

services:   db files
protocols:  db files

netgroup:   files
automount:  files nis
The databases available over NSS are listed in Table 20.7, Databases Available via
/etc/nsswitch.conf (page 338). In addition, automount, bootparams, netmasks,
and publickey are expected in the near future. The configuration options for NSS
databases are listed in Table 20.8, Configuration Options for NSS Databases
(page 339).
Table 20.7 Databases Available via /etc/nsswitch.conf

aliases
    Mail aliases, used by sendmail; see the aliases(5) man page.

ethers
    Ethernet addresses.

group
    For user groups, used by getgrent. See also the man page
    for group.

hosts
    For hostnames and IP addresses, used by gethostbyname and
    similar functions.

netgroup
    Valid host and user lists in the network for the purpose of
    controlling access permissions; see the netgroup(5) man
    page.

networks
    Network names and addresses, used by getnetent.

passwd
    User passwords, used by getpwent; see the passwd(5) man page.

protocols
    Network protocols, used by getprotoent; see the protocols(5)
    man page.

rpc
    Remote procedure call names and addresses, used by
    getrpcbyname and similar functions.

services
    Network services, used by getservent.

shadow
    Shadow passwords of users, used by getspnam; see the
    shadow(5) man page.
Table 20.8 Configuration Options for NSS Databases

files
    Directly access files, for example, /etc/aliases.

db
    Access via a database.

nis, nisplus
    NIS; see also the chapter about NIS.

dns
    Can only be used as an extension for hosts and networks.

compat
    Can only be used as an extension for passwd, shadow, and group.
/etc/nscd.conf
This file is used to configure nscd (name service cache daemon). See the nscd(8)
and nscd.conf(5) man pages. By default, the system entries of passwd and
groups are cached by nscd. This is important for the performance of directory services,
like NIS and LDAP, because otherwise the network connection needs to be used for
every access to names or groups. hosts is not cached by default, because the mechanism in nscd to cache hosts makes the local system unable to trust forward and reverse
lookup checks. Instead of asking nscd to cache names, set up a caching DNS server.
If the caching for passwd is activated, it usually takes about fifteen seconds until a
newly added local user is recognized. Reduce this waiting time by restarting nscd with
the command rcnscd restart.
/etc/HOSTNAME
This contains the hostname without the domain name attached. This file is read by
several scripts while the machine is booting. It may only contain one line in which the
hostname is set.
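You can compare the stored name with the hostname the kernel is currently using; uname -n prints the node name, which should match the single line in /etc/HOSTNAME:

```shell
# Print the hostname currently set in the kernel
uname -n
```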
maddress
This object represents a multicast address.
mroute
This object represents a multicast routing cache entry.
tunnel
This object represents a tunnel over IP.
If no command is given, the default command is used, usually list.
Change the state of a device with the command ip link
set device_name command. For example, to deactivate device eth0, enter ip
link set eth0 down. To activate it again, use ip link set eth0 up.
After activating a device, you can configure it. To set the IP address, use ip addr
add ip_address + dev device_name. For example, to set the address of the
interface eth0 to 192.168.12.154/30 with standard broadcast (option brd), enter ip
addr add 192.168.12.154/30 brd + dev eth0.
To have a working connection, you must also configure the default gateway. To set a
gateway for your system, enter ip route add gateway_ip_address. To
translate one IP address to another, use nat: ip route add
nat ip_address via other_ip_address.
To display all devices, use ip link ls. To display the running interfaces only, use
ip link ls up. To print interface statistics for a device, enter ip -s link
ls device_name. To view addresses of your devices, enter ip addr. The output
of ip addr also contains information about the MAC addresses of your devices. To show
all routes, use ip route show.
For more information about using ip, enter ip help or see the ip(8) man page. The
help option is also available for all ip objects. If, for example, you want to read help
for ip addr, enter ip addr help. Find the ip manual in /usr/share/doc/
packages/iproute2/ip-cref.pdf.
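The read-only queries mentioned above are safe to run as a normal user; only commands that change device state or addresses require root privileges. A short session might look like this (device names vary by system):

```shell
# Show all network devices in one-line format; no root needed
ip -o link show

# Show the addresses assigned to the loopback device
ip -o addr show lo
```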
to the destination host, requesting an immediate reply. If this works, ping displays a
message to that effect, which indicates that the network link is basically functioning.
ping does more than test only the function of the connection between two computers:
it also provides some basic information about the quality of the connection. In Example 20.10, Output of the Command ping (page 342), you can see an example of the
ping output. The second-to-last line contains information about the number of transmitted
packets, packet loss, and total time of ping running.
As the destination, you can use a hostname or IP address, for example,
ping venus.example.com or ping 192.168.2.101. The program sends
packets until you press Ctrl + C.
If you only need to check the functionality of the connection, you can limit the number
of the packets with the -c option. For example, to limit ping to three packets, enter
ping -c 3 192.168.2.101.
Example 20.10 Output of the Command ping
ping -c 3 venus.example.com
PING venus.example.com (192.168.2.101) 56(84) bytes of data.
64 bytes from venus.example.com (192.168.2.101): icmp_seq=1 ttl=49 time=188
ms
64 bytes from venus.example.com (192.168.2.101): icmp_seq=2 ttl=49 time=184
ms
64 bytes from venus.example.com (192.168.2.101): icmp_seq=3 ttl=49 time=183
ms
--- venus.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 183.417/185.447/188.259/2.052 ms
The default interval between two packets is one second. To change the interval, ping
provides the -i option. For example, to increase the ping interval to ten seconds, enter
ping -i 10 192.168.2.101.
In a system with multiple network devices, it is sometimes useful to send the ping
through a specific interface address. To do so, use the -I option with the name of the
selected device, for example, ping -I wlan1 192.168.2.101.
For more options and information about using ping, enter ping -h or see the ping
(8) man page.
For more options and information about using ifconfig, enter ifconfig -h or see
the ifconfig (8) man page.
Genmask         Flags   MSS  Window  irtt  Iface
255.255.248.0   U       0    0       0     eth0
255.255.0.0     U       0    0       0     eth0
255.0.0.0       U       0    0       0     lo
0.0.0.0         UG      0    0       0     eth0
For more options and information about using route, enter route -h or see the route
(8) man page.
/etc/init.d/network
/etc/init.d/xinetd
/etc/init.d/portmap
/etc/init.d/ypserv
/etc/init.d/ypbind
21 SLP Services in the Network
The service location protocol (SLP) was developed to simplify the configuration of
networked clients within a local network. To configure a network client, including all
required services, the administrator traditionally needs detailed knowledge of the servers
available in the network. SLP makes the availability of selected services known to all
clients in the local network. Applications that support SLP can use the information
distributed and be configured automatically.
openSUSE supports installation using installation sources provided with SLP and
contains many system services with integrated support for SLP. YaST and Konqueror
both have appropriate front-ends for SLP. You can use SLP to provide networked clients
with central functions, such as an installation server, file server, or print server on your
system.
IMPORTANT: SLP Support in openSUSE
Services that offer SLP support include cupsd, rsyncd, ypserv, openldap2,
openwbem (CIM), ksysguardd, saned, kdm vnc login, smpppd, rpasswd, postfix,
and sshd (via fish).
21.1 Installation
Only an SLP client and slptools are installed by default. If you want to provide services
via SLP, install the package openslp-server. To install the package, start YaST
and select Software > Software Management. Now choose Filter > Patterns and click
The most important line in this file is the service URL, which begins with
service:. This contains the service type (scanner.sane) and the address
under which the service is available on the server. $HOSTNAME is automatically
replaced with the full hostname. The name of the TCP port on which the relevant
service can be found follows, separated by a colon. Then enter the language in
which the service should appear and the duration of registration in seconds. These
should be separated from the service URL by commas. Set the value for the duration
of registration between 0 and 65535. 0 prevents registration. 65535 removes all
restrictions.
The registration file also contains the two variables watch-port-tcp and
description. watch-port-tcp links the SLP service announcement to
whether the relevant service is active by having slpd check the status of the service.
The second variable contains a more precise description of the service that is displayed in suitable browsers.
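Putting these elements together, a registration file for a scanner service could look like the following sketch. The port number 6566 and the description text are illustrative assumptions:

```
## Illustrative slp.reg.d registration sketch.
## Service URL: type, host, and port, followed by the language (en)
## and the registration lifetime in seconds (65535 removes all
## restrictions).
service:scanner.sane://$HOSTNAME:6566,en,65535
watch-port-tcp=6566
description=SANE scanner daemon
```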
Static Registration with /etc/slp.reg
The only difference from the procedure with /etc/slp.reg.d is the grouping
of all services within a central file.
Dynamic Registration with slptool
If a service should be registered for SLP from proprietary scripts, use the slptool
command line front-end.
22 The Domain Name System
DNS (domain name system) is needed to resolve the domain names and hostnames into
IP addresses. In this way, the IP address 192.168.2.100 is assigned to the hostname
jupiter, for example. Before setting up your own name server, read the general information about DNS in Section 20.3, Name Resolution (page 314). The following
configuration examples refer to BIND.
(not expired) zone data. If the slave cannot obtain a new copy of the zone data,
it stops responding for the zone.
Forwarder
Forwarders are DNS servers to which your DNS server should send queries it
cannot answer.
Record
A record holds information about a name and its IP address. Supported records and their
syntax are described in BIND documentation. Some special records are:
NS record
An NS record tells name servers which machines are in charge of a given domain zone.
MX record
The MX (mail exchange) records describe the machines to contact for directing
mail across the Internet.
SOA record
The SOA (start of authority) record is the first record in a zone file. The SOA
record is used when using DNS to synchronize data between multiple computers.
22.2 Installation
To install a DNS server, start YaST and select Software > Software Management.
Choose Filter > Patterns and select DHCP and DNS Server. Confirm the installation
of the dependent packages to finish the installation process.
2 The DNS Zones dialog consists of several parts and is responsible for the management of zone files, described in Section 22.6, Zone Files (page 367). For a
new zone, provide a name for it in Zone Name. To add a reverse zone, the name
must end in .in-addr.arpa. Finally, select the Zone Type (master or slave).
See Figure 22.2, DNS Server Installation: DNS Zones (page 356). Click Edit
Zone to configure other settings of an existing zone. To remove a zone, click
Delete Zone.
3 In the final dialog, you can open the ports for the DNS service in the firewall
that is activated during the installation and decide whether DNS should be started.
The expert configuration can also be accessed from this dialog. See Figure 22.3,
DNS Server Installation: Finish Wizard (page 356).
Figure 22.3 DNS Server Installation: Finish Wizard
Logging
To set what the DNS server should log and how, select Logging. Under Log Type,
specify where the DNS server should write the log data. Use the systemwide log file
/var/log/messages by selecting System Log or specify a different file by selecting
File. In the latter case, additionally specify a name, the maximum file size in megabytes
and the number of versions of log files to store.
Further options are available under Additional Logging. Enabling Log All DNS Queries
causes every query to be logged, in which case the log file could grow extremely large.
For this reason, it is not a good idea to enable this option for other than debugging
purposes. To log the data traffic during zone updates between DHCP and DNS server,
enable Log Zone Updates. To log the data traffic during a zone transfer from master to
slave, enable Log Zone Transfer. See Figure 22.4, DNS Server: Logging (page 358).
provider. BIND carries out name resolution via the root name server, a notably slower
process. Normally, the DNS of the provider should be entered with its IP address in the
configuration file /etc/named.conf under forwarders to ensure effective and
secure name resolution. If this works so far, the name server runs as a pure caching-only name server. Only when you configure its own zones will it become a proper DNS.
A simple example of this is included in the documentation in /usr/share/doc/
packages/bind/config.
TIP: Automatic Adaptation of the Name Server Information
Depending on the type of Internet connection or the network connection, the
name server information can automatically be adapted to the current conditions.
To do this, set the variable MODIFY_NAMED_CONF_DYNAMICALLY in the file
/etc/sysconfig/network/config to yes.
However, do not set up any official domains until assigned one by the responsible institution. Even if you have your own domain and it is managed by the provider, you are
better off not using it, because BIND would otherwise not forward requests for this
domain. The Web server at the provider, for example, would not be accessible for this
domain.
To start the name server, enter the command rcnamed start as root. If done
appears to the right in green, named, as the name server process is called, has been
started successfully. Test the name server immediately on the local system with the
host or dig programs, which should return localhost as the default server with
the address 127.0.0.1. If this is not the case, /etc/resolv.conf probably
contains an incorrect name server entry or the file does not exist at all. For the first test,
enter host 127.0.0.1, which should always work. If you get an error message, use
rcnamed status to see whether the server is actually running. If the name server
does not start or behaves unexpectedly, you can usually find the cause in the log file
/var/log/messages.
To use the name server of the provider or one already running on your network as the
forwarder, enter the corresponding IP address or addresses in the options section
under forwarders. The addresses included in Example 22.1, Forwarding Options
in named.conf (page 363) are just examples. Adjust these entries to your own setup.
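Such a forwarders block could be sketched as follows. The addresses are placeholders for your provider's name servers:

```
options {
        forwarders { 10.11.12.13; 10.11.12.14; };
};
```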
The options entry is followed by entries for the zones ".", localhost, and
0.0.127.in-addr.arpa. The type hint entry under "." should always be
present. The corresponding files do not need to be modified and should work as they
are. Also make sure that each entry is closed with a ; and that the curly braces are in
the correct places. After changing the configuration file /etc/named.conf or the
zone files, tell BIND to reread them with rcnamed reload. Achieve the same by
stopping and restarting the name server with rcnamed restart. Stop the server at
any time by entering rcnamed stop.
127.0.0.1 to permit requests from the local host. If you omit this entry entirely,
all interfaces are used by default.
listen-on-v6 port 53 { any; };
Tells BIND on which port it should listen for IPv6 client requests. The only alternative to any is none. As far as IPv6 is concerned, the server only accepts a wildcard
address.
query-source address * port 53;
This entry is necessary if a firewall is blocking outgoing DNS requests. This tells
BIND to post requests externally from port 53 and not from any of the high ports
above 1024.
query-source-v6 address * port 53;
Tells BIND which port to use for IPv6 queries.
allow-query { 127.0.0.1; net; };
Defines the networks from which clients can post DNS requests. Replace net with
address information like 192.168/16. The /16 at the end is an abbreviated expression for the netmask, in this case, 255.255.0.0.
allow-transfer { ! *; };
Controls which hosts can request zone transfers. In the example, such requests are
completely denied with ! *. Without this entry, zone transfers can be requested
from anywhere without restrictions.
statistics-interval 0;
In the absence of this entry, BIND generates several lines of statistical information
per hour in /var/log/messages. Set it to 0 to suppress these statistics completely or set an interval in minutes.
cleaning-interval 720;
This option defines at which time intervals BIND clears its cache. This triggers an
entry in /var/log/messages each time it occurs. The time specification is in
minutes. The default is 60 minutes.
interface-interval 0;
BIND regularly searches the network interfaces for new or nonexistent interfaces.
If this value is set to 0, this is not done and BIND only listens at the interfaces detected at start-up. Otherwise, the interval can be defined in minutes. The default is
sixty minutes.
The Domain Name System
365
notify no;
no prevents other name servers from being informed when changes are made to
the zone data or when the name server is restarted.
22.5.2 Logging
What, how, and where logging takes place can be extensively configured in BIND.
Normally, the default settings should be sufficient. Example 22.3, Entry to Disable
Logging (page 366) shows the simplest form of such an entry and completely suppresses
any logging.
Example 22.3 Entry to Disable Logging
logging {
category default { null; };
};
After zone, specify the name of the domain to administer (example.com) followed
by in and a block of relevant options enclosed in curly braces, as shown in Example 22.4, Zone Entry for example.com (page 366). To define a slave zone, switch the
type to slave and specify a name server that administers this zone as master
(which, in turn, may be a slave of another master), as shown in Example 22.5, Zone
Entry for example.net (page 366).
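A master zone entry of the kind described above could be sketched as follows. The zone file name is an assumption matching the naming convention used in this chapter:

```
zone "example.com" in {
        type master;
        file "example.com.zone";
        notify no;
};
```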
Example 22.5 Zone Entry for example.net
zone "example.net" in {
type slave;
file "slave/example.net.zone";
masters { 10.0.0.1; };
};
$TTL 2D
example.com. IN SOA      dns  root.example.com. (
             2003072441  ; serial
             1D          ; refresh
             2H          ; retry
             1W          ; expiry
             2D )        ; minimum

             IN NS       dns
             IN MX       10 mail

gate         IN A        192.168.5.1
             IN A        10.0.0.1
dns          IN A        192.168.1.116
mail         IN A        192.168.3.108
jupiter      IN A        192.168.2.100
venus        IN A        192.168.2.101
saturn       IN A        192.168.2.102
mercury      IN A        192.168.2.103
ntp          IN CNAME    dns
Line 1:
$TTL defines the default time to live that should apply to all the entries in this file.
In this example, entries are valid for a period of two days (2D).
Line 2:
This is where the SOA (start of authority) control record begins:
The name of the domain to administer is example.com in the first position.
This ends with ., because otherwise the zone would be appended a second time.
Alternatively, @ can be entered here, in which case the zone would be extracted
from the corresponding entry in /etc/named.conf.
After IN SOA is the name of the name server in charge as master for this zone.
The name is expanded from dns to dns.example.com, because it does not
end with a ..
An e-mail address of the person in charge of this name server follows. Because
the @ sign already has a special meaning, . is entered here instead. For
[email protected] the entry must read root.example.com.. The .
must be included at the end to prevent the zone from being added.
The ( includes all lines up to ) into the SOA record.
Line 3:
The serial number is an arbitrary number that is increased each time this file
is changed. It is needed to inform the secondary name servers (slave servers) of
changes. For this, a 10-digit number consisting of the date and a run number, written as
YYYYMMDDNN, has become the customary format.
Line 4:
The refresh rate specifies the time interval at which the secondary name
servers verify the zone serial number. In this case, one day.
Line 5:
The retry rate specifies the time interval at which a secondary name server,
in case of error, attempts to contact the primary server again. Here, two hours.
Line 6:
The expiration time specifies the time frame after which a secondary name
server discards the cached data if it has not regained contact to the primary server.
Here, it is a week.
Line 7:
The last entry in the SOA record specifies the negative caching TTL, the
time for which results of unresolved DNS queries from other servers may be cached.
Line 9:
The IN NS specifies the name server responsible for this domain. dns is extended
to dns.example.com because it does not end with a .. There can be several
lines like this, one for the primary and one for each secondary name server. If
The Domain Name System
369
notify is not set to no in /etc/named.conf, all the name servers listed here
are informed of the changes made to the zone data.
Line 10:
The MX record specifies the mail server that accepts, processes, and forwards e-mails for the domain example.com. In this example, this is the host
mail.example.com. The number in front of the hostname is the preference
value. If there are multiple MX entries, the mail server with the smallest value is
taken first and, if mail delivery to this server fails, an attempt is made with the next
higher value.
Lines 12 to 19:
These are the actual address records where one or more IP addresses are assigned
to hostnames. The names are listed here without a . because they do not include
their domain, so example.com is added to all of them. Two IP addresses are assigned to the host gate, because it has two network cards. Wherever the host address is a traditional one (IPv4), the record is marked with A. If the address is
an IPv6 address, the entry is marked with AAAA 0. The previous token for IPv6
addresses was only AAAA, which is now obsolete.
NOTE: IPv6 Syntax
The IPv6 record has a slightly different syntax than IPv4. Because of the
fragmentation possibility, it is necessary to provide information about
missing bits before the address. You must provide this information even if
you want to use a completely unfragmented address. For the IPv4 record
with the syntax
pluto IN  AAAA 2345:00C1:CA11:0001:1234:5678:9ABC:DEF0
pluto IN  AAAA 2345:00D2:DA11:0001:1234:5678:9ABC:DEF0
You need to add information about missing bits in IPv6 format. Because
the example above is complete (does not miss any bits), the IPv6 format
of this record is:
pluto IN  AAAA 0 2345:00C1:CA11:0001:1234:5678:9ABC:DEF0
pluto IN  AAAA 0 2345:00D2:DA11:0001:1234:5678:9ABC:DEF0
$TTL 2D
168.192.in-addr.arpa.   IN SOA dns.example.com. root.example.com. (
                        2003072441  ; serial
                        1D          ; refresh
                        2H          ; retry
                        1W          ; expiry
                        2D )        ; minimum

                        IN NS       dns.example.com.

1.5                     IN PTR      gate.example.com.
100.3                   IN PTR      www.example.com.
253.2                   IN PTR      cups.example.com.
Line 1:
$TTL defines the standard TTL that applies to all entries here.
Line 2:
The configuration file should activate reverse lookup for the network 192.168.
Given that the zone is called 168.192.in-addr.arpa, this zone name should not be
added to the hostnames. Therefore, all hostnames are entered in their complete form,
with their domain and with a . at the end. The remaining entries correspond to those
described for the previous example.com example.
Lines 3 to 7:
See the previous example for example.com.
Line 9:
Again this line specifies the name server responsible for this zone. This time,
however, the name is entered in its complete form with the domain and a . at the
end.
Lines 11 to 13:
These are the pointer records hinting at the IP addresses on the respective hosts.
Only the last part of the IP address is entered at the beginning of the line, without
the . at the end. Appending the zone to this (without the .in-addr.arpa) results
in the complete IP address in reverse order.
Normally, zone transfers between different versions of BIND should be possible without
any problem.
ple). On the remote server, the key must be included in the file /etc/named.conf
to enable a secure communication between host1 and host2:
key host1-host2. {
    algorithm hmac-md5;
    secret "ejIkuCyyGJwwuN3xAteKgg==";
};
"filename"
This topic is discussed in more detail in the BIND Administrator Reference Manual
under update-policy.
A zone considered secure must have one or several zone keys associated with it. These
are generated with dnssec-keygen, just like the host keys. The DSA encryption
algorithm is currently used to generate these keys. The public keys generated should
be included in the corresponding zone file with an $INCLUDE rule.
With the command dnssec-makekeyset, all keys generated are packaged into one
set, which must then be transferred to the parent zone in a secure manner. On the parent,
the set is signed with dnssec-signkey. The files generated by this command are
then used to sign the zones with dnssec-signzone, which in turn generates the
files to include for each zone in /etc/named.conf.
23 DHCP
The purpose of the dynamic host configuration protocol (DHCP) is to assign network
settings centrally from a server rather than configuring them locally on each and every
workstation. A host configured to use DHCP does not have control over its own static
address. It is enabled to configure itself completely and automatically according to directions from the server. If you use the NetworkManager on the client side, you do not
need to configure the client at all. This is useful if you have changing environments
and only one interface active at a time. Never use NetworkManager on a machine that
runs a DHCP server.
One way to configure a DHCP server is to identify each client using the hardware address
of its network card (which should be fixed), then supply that client with identical settings
each time it connects to the server. DHCP can also be configured to assign addresses
to each interested client dynamically from an address pool set up for that purpose. In
the latter case, the DHCP server tries to assign the same address to the client each time
it receives a request, even over longer periods. This works only if the network does not
have more clients than addresses.
DHCP makes life easier for system administrators. Any changes, even bigger ones, related to addresses and the network configuration in general can be implemented centrally
by editing the server's configuration file. This is much more convenient than reconfiguring numerous workstations. Also it is much easier to integrate machines, particularly
new machines, into the network, because they can be given an IP address from the pool.
Retrieving the appropriate network settings from a DHCP server is especially useful
in the case of laptops regularly used in different networks.
In this chapter, the DHCP server will run in the same subnet as the workstations,
192.168.2.0/24 with 192.168.2.1 as gateway. It has the fixed IP address 192.168.2.254
Global Settings
Use the check box to determine whether your DHCP settings should be automatically stored by an LDAP server. In the entry fields, provide the network specifics
for all clients the DHCP server should manage. These specifics are the domain
name, address of a time server, addresses of the primary and secondary name
server, addresses of a print and a WINS server (for a mixed network with both
Windows and Linux clients), gateway address, and lease time. See Figure 23.2,
DHCP Server: Global Settings (page 378).
Dynamic DHCP
In this step, configure how dynamic IP addresses should be assigned to clients. To
do so, specify an IP range from which the server can assign addresses to DHCP
clients. All these addresses must be covered by the same netmask. Also specify the
lease time during which a client may keep its IP address without needing to request
an extension of the lease. Optionally, specify the maximum lease time, the period
during which the server reserves an IP address for a particular client. See Figure 23.3, DHCP Server: Dynamic DHCP (page 379).
Host Management
Instead of using dynamic DHCP in the way described in the preceding sections,
you can also configure the server to assign addresses in quasi-static fashion. To do
so, use the entry fields provided in the lower part to specify a list of the clients to
manage in this way. Specifically, provide the Name and the IP Address to give to
such a client, the Hardware Address, and the Network Type (token ring or ethernet).
Modify the list of clients, which is shown in the upper part, with Add, Edit, and
Delete from List. See Figure 23.5, DHCP Server: Host Management (page 381).
Subnet Configuration
This dialog allows you to specify a new subnet with its IP address and netmask. In
the middle part of the dialog, modify the DHCP server start options for the selected
subnet using Add, Edit, and Delete. To set up dynamic DNS for the subnet, select
Dynamic DNS.
After completing all configuration steps, close the dialog with Ok. Now the server is
started with its new configuration.
Example 23.1 The Configuration File /etc/dhcpd.conf

default-lease-time 600;         # 10 minutes
max-lease-time 7200;            # 2 hours

option domain-name "example.com";
option domain-name-servers 192.168.1.116;
option broadcast-address 192.168.2.255;
option routers 192.168.2.1;
option subnet-mask 255.255.255.0;

subnet 192.168.2.0 netmask 255.255.255.0
 {
   range 192.168.2.10 192.168.2.20;
   range 192.168.2.100 192.168.2.200;
 }
This simple configuration file should be sufficient to get the DHCP server to assign IP
addresses in the network. Make sure that a semicolon is inserted at the end of each line,
because otherwise dhcpd is not started.
The sample file can be divided into three sections. The first one defines how many
seconds an IP address is leased to a requesting client by default
(default-lease-time) before it should apply for renewal. The section also includes
a statement of the maximum period for which a machine may keep an IP address assigned
by the DHCP server without applying for renewal (max-lease-time).
In the second part, some basic network parameters are defined on a global level:
The line option domain-name defines the default domain of your network.
With the entry option domain-name-servers, specify up to three values
for the DNS servers used to resolve IP addresses into hostnames and vice versa.
Ideally, configure a name server on your machine or somewhere else in your network
before setting up DHCP. That name server should also define a hostname for each
dynamic address and vice versa. To learn how to configure your own name server,
read Chapter 22, The Domain Name System (page 353).
The line option broadcast-address defines the broadcast address the requesting client should use.
With option routers, set where the server should send data packets that
cannot be delivered to a host on the local network (according to the source and
target host address and the subnet mask provided). In most cases, especially in
smaller networks, this router is identical to the Internet gateway.
With option subnet-mask, specify the netmask assigned to clients.
The last section of the file defines a network, including a subnet mask. To finish,
specify the address range that the DHCP daemon should use to assign IP addresses to
interested clients. In Example 23.1, The Configuration File /etc/dhcpd.conf (page 388),
clients may be given any address between 192.168.2.10 and 192.168.2.20 as
well as 192.168.2.100 and 192.168.2.200.
After editing these few lines, you should be able to activate the DHCP daemon with
the command rcdhcpd start. It will be ready for use immediately. Use the command
rcdhcpd check-syntax to perform a brief syntax check. If you encounter any
unexpected problems with your configuration (the server aborts with an error or does
not return done on start), you should be able to find out what has gone wrong by
looking for information either in the main system log /var/log/messages or on
console 10 (Ctrl + Alt + F10).
On a default openSUSE system, the DHCP daemon is started in a chroot environment
for security reasons. The configuration files must be copied to the chroot environment
so the daemon can find them. Normally, there is no need to worry about this because
the command rcdhcpd start automatically copies the files.
there were not enough addresses available and the server needed to redistribute them
among clients.
To identify a client configured with a static address, dhcpd uses the hardware address,
a globally unique, fixed code of six octets, written as pairs of hexadecimal digits, that
identifies a network device (for example, 00:30:6E:08:EC:80). If the
respective lines, like the ones in Example 23.2, Additions to the Configuration File
(page 390), are added to the configuration file of Example 23.1, The Configuration
File /etc/dhcpd.conf (page 388), the DHCP daemon always assigns the same set of
data to the corresponding client.
Example 23.2 Additions to the Configuration File
host jupiter {
hardware ethernet 00:30:6E:08:EC:80;
fixed-address 192.168.2.100;
}
The name of the respective client (host hostname, here jupiter) is entered in
the first line and the MAC address in the second line. On Linux hosts, find the MAC
address with the command ip link show followed by the network device (for example, eth0). The output should contain something like
link/ether 00:30:6E:08:EC:80
In the preceding example, a client with a network card having the MAC address
00:30:6E:08:EC:80 is assigned the IP address 192.168.2.100 and the hostname
jupiter automatically. The type of hardware to enter is ethernet in nearly all
cases, although token-ring, which is often found on IBM systems, is also supported.
To enable dhcpd to resolve hostnames even from within the chroot environment, some
other configuration files must be copied as well:
/etc/localtime
/etc/host.conf
/etc/hosts
/etc/resolv.conf
These files are copied to /var/lib/dhcp/etc/ when the init script starts. If these
files are dynamically modified by scripts like /etc/ppp/ip-up, remember that it is
the copies in the chroot environment that the daemon reads. However, there should be no need to
worry about this if the configuration file only specifies IP addresses (instead of hostnames).
If your configuration includes additional files that should be copied into the chroot environment, set these under the variable DHCPD_CONF_INCLUDE_FILES in the file
/etc/sysconfig/dhcpd. To ensure that the DHCP logging facility keeps working
even after a restart of the syslog-ng daemon, there is an additional entry
SYSLOGD_ADDITIONAL_SOCKET_DHCP in the file /etc/sysconfig/syslog.
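As a sketch, the relevant lines in /etc/sysconfig/dhcpd might look as follows (the name of the include file is a hypothetical example for illustration):

```
# /etc/sysconfig/dhcpd (excerpt)
# Additional files to copy into the chroot environment
DHCPD_CONF_INCLUDE_FILES="/etc/dhcpd.conf.local"
# Run the daemon in a chroot jail (the default)
DHCPD_RUN_CHROOTED="yes"
```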
24
Time Synchronization with NTP
NTP (network time protocol) is a mechanism for synchronizing the system
time over the network. First, a machine can obtain the time from a server that is a reliable
time source. Second, a machine can itself act as a time source for other computers in
the network. The goal is twofold: maintaining the absolute time and synchronizing
the system time of all machines within a network.
Maintaining an exact system time is important in many situations. The built-in hardware
(BIOS) clock often does not meet the requirements of applications like databases.
Manual correction of the system time would lead to severe problems because, for example, a backward leap can cause malfunction of critical applications. Within a network,
it is usually necessary to synchronize the system time of all machines, but manual time
adjustment is a bad approach. xntp provides a mechanism to solve these problems. It
continuously adjusts the system time with the help of reliable time servers in the network.
It further enables the management of local reference clocks, such as radio-controlled
clocks.
firewall-protected system, the advanced configuration can open the required ports in
SuSEfirewall2.
In the detailed server selection dialog, determine whether to implement time synchronization using a time server from your local network (Local NTP Server) or an Internet-based time server that takes care of your time zone (Public NTP Server). For a local
time server, click Lookup to start an SLP query for available time servers in your network. Select the most suitable time server from the list of search results and exit the
dialog with OK. For a public time server, select your country (time zone) and a suitable
server from the list under Public NTP Server then exit the dialog with OK. In the main
dialog, test the availability of the selected server with Test and quit the dialog with
Finish.
On the General Settings tab, configure the mode of operation of xntpd. Configure NTP
Daemon via DHCP sets up the NTP client to get a list of the NTP servers available in
your network via DHCP.
The servers and other time sources for the client to query are listed in the lower part.
Modify this list as needed with Add, Edit, and Delete. Display Log lets you view the log files of your client.
Click Add to add a new source of time information. In the following dialog, select the
type of source with which the time synchronization should be made. The following
options are available:
Server
Another dialog enables you to select an NTP server (as described in Section 24.1.1,
Quick NTP Client Configuration (page 394)). Activate Use for Initial Synchronization to trigger the synchronization of the time information between the server
and the client when the system is booted. Options allows you to specify additional
options for xntpd.
Using Access Control Options, you can restrict the actions that the remote computer
can perform with the daemon running on your computer. This field is enabled only
after checking Restrict NTP Service to Configured Servers Only on the Security
Settings tab. The options correspond to the restrict clauses in /etc/ntp.conf.
For example, nomodify notrap noquery prevents the server from
modifying NTP settings of your computer and from using the trap facility (a remote event
logging feature) of your NTP daemon. Using these restrictions is recommended
for servers outside your control (e.g., on the Internet).
Refer to /usr/share/doc/packages/xntp-doc (part of the xntp-doc
package) for detailed information.
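Such a restriction would appear in /etc/ntp.conf roughly as follows (a sketch; the server name is a placeholder):

```
# Query an external server, but do not let it modify or query our daemon
server ntp.example.com
restrict ntp.example.com nomodify notrap noquery
```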
Peer
A peer is a machine to which a symmetric relationship is established: it acts both
as a time server and as a client. To use a peer in the same network instead of a
server, enter the address of the system. The rest of the dialog is identical to the
Server dialog.
Radio Clock
To use a radio clock in your system for the time synchronization, enter the clock
type, unit number, device name, and other options in this dialog. Click Driver
Calibration to fine-tune the driver. Detailed information about the operation of a
local radio clock is available in /usr/share/doc/packages/xntp-doc/
refclock.html.
Outgoing Broadcast
Time information and queries can also be transmitted by broadcast in the network.
In this dialog, enter the address to which such broadcasts should be sent. Do not
activate broadcasting unless you have a reliable time source like a radio controlled
clock.
Incoming Broadcast
If you want your client to receive its information via broadcast, enter the address
from which the respective packets should be accepted in this field.
Figure 24.3 Advanced NTP Client Configuration: Security Settings
On the Security Settings tab, determine whether xntpd should be started in a chroot jail.
By default, Run NTP Daemon in Chroot Jail is activated. This increases security
in the event of an attack via xntpd, because it prevents the attacker from compromising
the entire system.
Restrict NTP Service to Configured Servers Only increases the security of your system
by preventing remote computers from viewing and modifying the NTP settings of your computer
and from using the trap facility for remote event logging. Once enabled, these restrictions
apply to all remote computers, unless you override the access control options for individual computers in the list of time sources on the General Settings tab. For all other
remote computers, only querying for local time is allowed.
Enable Open Port in Firewall if SuSEfirewall2 is active, which it is by default. If you
leave the port closed, it is not possible to establish a connection to the time server.
To add more time servers, insert additional lines with the keyword server. After
initializing xntpd with the command rcntpd start, it takes about one hour until
the time is stabilized and the drift file for correcting the local computer clock is created.
With the drift file, the systematic error of the hardware clock can be computed as soon
as the computer is powered on. The correction is used immediately, resulting in a
higher stability of the system time.
There are two possible ways to use the NTP mechanism as a client: First, the client can
query the time from a known server at regular intervals. With many clients, this approach
can cause a high load on the server. Second, the client can wait for NTP broadcasts sent
out by broadcast time servers in the network. This approach has the disadvantage that
the quality of the server is unknown and a server sending out wrong information can
cause severe problems.
If the time is obtained via broadcast, you do not need the server name. In this case, enter
the line broadcastclient in the configuration file /etc/ntp.conf. To use one
or more known time servers exclusively, enter their names in lines starting with
the keyword server.
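A minimal /etc/ntp.conf covering the two client modes could therefore look like this (a sketch; the server names are placeholders):

```
# Variant 1: poll one or more known servers
server ntp1.example.com
server ntp2.example.com
driftfile /var/lib/ntp/drift/ntp.drift   # stores the computed clock error

# Variant 2: listen for NTP broadcasts instead
# (use only with a trusted time source in the network)
# broadcastclient
```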
Local reference clocks are addressed with pseudo IP addresses of the form
127.127.t.u. Here, t stands for the type of the clock and determines which driver
is used and u for the unit, which determines the interface used.
Normally, the individual drivers have special parameters that describe configuration
details. The file /usr/share/doc/packages/xntp-doc/drivers/driverNN
.html (where NN is the number of the driver) provides information about the particular
type of clock. For example, the type 8 clock (radio clock over serial interface) requires
an additional mode that specifies the clock more precisely. The Conrad DCF77 receiver
module, for example, has mode 5. To use this clock as a preferred reference, specify
the keyword prefer. The complete server line for a Conrad DCF77 receiver
module would be:
server 127.127.8.0 mode 5 prefer
Other clocks follow the same pattern. Following the installation of the xntp-doc
package, the documentation for xntp is available in the directory /usr/share/doc/
packages/xntp-doc. The file /usr/share/doc/packages/xntp-doc/
refclock.html provides links to the driver pages describing the driver parameters.
25
Using NIS
and set up slave servers in the subnets as described in Section 25.1.2, Configuring
a NIS Slave Server (page 406).
3e Leave this dialog with Next or click Other global settings to make additional
settings. Other global settings include changing the source directory of the
NIS server (/etc by default). In addition, passwords can be merged here.
The setting should be Yes so the files (/etc/passwd, /etc/shadow,
and /etc/group) are used to build the user database. Also determine the
smallest user and group ID that should be offered by NIS. Click OK to confirm your settings and return to the previous screen.
Figure 25.3 Changing the Directory and Synchronizing Files for a NIS Server
4 If you previously enabled Active Slave NIS Server Exists, enter the hostnames
used as slaves and click Next.
5 If you do not use slave servers, the slave configuration is skipped and you continue directly to the dialog for the database configuration. Here, specify the maps,
the partial databases to transfer from the NIS server to the client. The default
settings are usually adequate. Leave this dialog with Next.
6 Check which maps should be available and click Next to continue.
7 Enter the hosts that are allowed to query the NIS server. You can add, edit, or
delete hosts by clicking the appropriate button. Specify from which networks
requests can be sent to the NIS server. Normally, this is your internal network.
In this case, there should be the following two entries:
netmask       network
255.0.0.0     127.0.0.0
0.0.0.0       0.0.0.0
The first entry enables connections from your own host, which is the NIS server.
The second one allows all hosts to send requests to the server.
3c Set This host is also a NIS client if you want to enable user logins on this
server.
3d Adapt the firewall settings with Open Ports in Firewall.
3e Click Next.
4 Enter the hosts that are allowed to query the NIS server. You can add, edit, or
delete hosts by clicking the appropriate button. Specify from which networks
requests can be sent to the NIS server. Normally, this is all hosts. In this case,
there should be the following two entries:
netmask       network
255.0.0.0     127.0.0.0
0.0.0.0       0.0.0.0
The first entry enables connections from your own host, which is the NIS server.
The second one allows all hosts with access to the same network to send requests
to the server.
5 Click Finish to save changes and exit the setup.
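The netmask/network pairs entered in the step above end up in ypserv's /var/yp/securenets file (assuming YaST stores them in the standard ypserv location); written by hand, the two entries would look like this:

```
# /var/yp/securenets (sketch)
# netmask      network
255.0.0.0      127.0.0.0    # allow the NIS server itself (loopback)
0.0.0.0        0.0.0.0      # allow all hosts
```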
In the expert settings, disable Answer Remote Hosts if you do not want other hosts to
be able to query which server your client is using. By checking Broken Server, the client
can receive replies from a server communicating through an unprivileged port.
For further information, see man ypbind.
After you have made your settings, click Finish to save them and return to the YaST
control center.
Figure 25.6 Setting Domain and Address of a NIS Server
26
LDAP—A Directory Service
leaf
These objects sit at the end of a branch and have no subordinate objects. Examples
are person, inetOrgPerson, or groupOfNames.
The top of the directory hierarchy has a root element root. This can contain c (country),
dc (domain component), or o (organization) as subordinate elements. The relations
within an LDAP directory tree become more evident in the following example, shown
in Figure 26.1, Structure of an LDAP Directory (page 412).
Figure 26.1 Structure of an LDAP Directory

dc=example,dc=com
├── ou=devel
│   └── cn=Tux Linux
├── ou=doc
│   └── cn=Geeko Linux
└── ou=it
The diagram depicts a fictional directory information tree with entries on three
levels. Each entry corresponds to one box in the picture. The complete,
valid distinguished name for the fictional employee Geeko Linux is, in this case,
cn=Geeko Linux,ou=doc,dc=example,dc=com. It is composed by adding
the RDN cn=Geeko Linux to the DN of the preceding entry
ou=doc,dc=example,dc=com.
The types of objects that should be stored in the DIT are globally determined following
a scheme. The type of an object is determined by the object class. The object class determines what attributes the concerned object must or can be assigned. A scheme,
therefore, must contain definitions of all object classes and attributes used in the desired
application scenario. There are a few common schemes (see RFC 2252 and 2256). It
is, however, possible to create custom schemes or to use multiple schemes complementing each other if this is required by the environment in which the LDAP server should
operate.
Table 26.1, Commonly Used Object Classes and Attributes (page 413) offers a small
overview of the object classes from core.schema and inetorgperson.schema
used in the example, including required attributes and valid attribute values.
Table 26.1   Commonly Used Object Classes and Attributes

Object Class         Meaning                               Example Entry   Required Attributes
dcObject             domainComponent (name                 example         dc
                     components of the domain)
organizationalUnit   organizationalUnit                    doc             ou
                     (organizational unit)
inetOrgPerson        inetOrgPerson (person-related         Geeko Linux     sn and cn
                     data for the intranet or Internet)
Example 26.1, Excerpt from schema.core (page 413) shows an excerpt from a scheme
directive with explanations (line numbering for explanatory reasons).
Example 26.1 Excerpt from schema.core
#1 attributetype (2.5.4.11 NAME ( 'ou' 'organizationalUnitName')
#2
DESC 'RFC2256: organizational unit this object belongs to'
#3
SUP name )
...
#4 objectclass ( 2.5.6.5 NAME 'organizationalUnit'
#5
DESC 'RFC2256: an organizational unit'
#6
SUP top STRUCTURAL
#7
MUST ou
#8 MAY (userPassword $ searchGuide $ seeAlso $ businessCategory
$ x121Address $ registeredAddress $ destinationIndicator
$ preferredDeliveryMethod $ telexNumber
$ teletexTerminalIdentifier $ telephoneNumber
$ internationaliSDNNumber $ facsimileTelephoneNumber
$ street $ postOfficeBox $ postalCode $ postalAddress
$ physicalDeliveryOfficeName
$ st $ l $ description) )
...
2 With Log Level Settings, configure the degree of logging activity (verbosity) of
the LDAP server. From the predefined list, select or deselect the logging options
according to your needs. The more options are enabled, the larger your log files
grow.
3 Determine the connection types the LDAP server should allow. Choose from:
bind_v2
This option enables connection requests (bind requests) from clients using
the previous version of the protocol (LDAPv2).
bind_anon_cred
Normally the LDAP server denies any authentication attempts with empty
credentials (DN or password). Enabling this option, however, makes it possible to connect with a password and no DN to establish an anonymous
connection.
bind_anon_dn
Enabling this option makes it possible to connect without authentication
(anonymously) using a DN but no password.
update_anon
Enabling this option allows non-authenticated (anonymous) update operations.
Access is restricted according to ACLs and other rules (see Section 26.7.1,
Global Directives in slapd.conf (page 430)).
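In /etc/openldap/slapd.conf, these connection types map to the global allow directive; as a sketch, enabling the first two options would look like this:

```
# Global section of /etc/openldap/slapd.conf (sketch)
# Permit LDAPv2 binds and anonymous binds with a password but no DN
allow bind_v2 bind_anon_cred
```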
4 To configure secure communication between client and server, proceed with TLS
Settings:
4a Set TLS Active to Yes to enable TLS and SSL encryption of the client/server
communication.
4b Click Select Certificate and determine how to obtain a valid certificate.
Choose Import Certificate (import certificate from external source) or Use
Common Server Certificate (use the certificate created during installation).
If you opted for importing a certificate, YaST prompts you to specify
the exact path to its location.
If you opted for using the common server certificate and it has not been
created during installation, it is subsequently created.
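Behind the scenes, the certificate selection results in TLS directives in /etc/openldap/slapd.conf roughly like these (a sketch; the paths are assumptions based on the usual location of the common server certificate):

```
# TLS settings in /etc/openldap/slapd.conf (sketch)
TLSCACertificateFile /etc/ssl/certs/YaST-CA.pem
TLSCertificateFile /etc/ssl/servercerts/servercert.pem
TLSCertificateKeyFile /etc/ssl/servercerts/serverkey.pem
```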
3b Determine the time between a password expiration warning and the actual
password expiration.
3c Set the number of times an expired password can still be used to log in
before it is locked entirely.
4 Configure the lockout policies:
4a Enable password locking.
4b Determine the number of bind failures that trigger a password lock.
4c Determine the duration of the password lock.
4d Determine for how long password failures are kept in the cache before they
are purged.
5 Apply your password policy settings with Accept.
To edit a previously created database, select its base DN in the tree to the left. In the
right part of the window, YaST displays a dialog similar to the one used for the creation
of a new database, with the main difference that the base DN entry is grayed out and
cannot be changed.
After leaving the LDAP server configuration by selecting Finish, you are ready to go
with a basic working configuration for your LDAP server. To fine-tune this setup, edit
the file /etc/openldap/slapd.conf accordingly then restart the server.
let YaST manage users over LDAP. This basic setup is described in Section 26.4.1,
Configuring Basic Settings (page 420).
Use the YaST LDAP client to further configure the YaST group and user configuration
modules. This includes manipulating the default settings for new users and groups and
the number and nature of the attributes assigned to a user or a group. LDAP user management allows you to assign far more and different attributes to users and groups than
traditional user or group management solutions. This is described in Section 26.4.2,
Configuring the YaST Group and User Administration Modules (page 423).
To authenticate users of your machine against an OpenLDAP server and enable user
management via OpenLDAP, proceed as follows:
1 Click Use LDAP to enable the use of LDAP. Select Use LDAP but Disable Logins
instead if you want to use LDAP for authentication, but do not want other users
to log in to this client.
2 Enter the IP address of the LDAP server to use.
3 Enter the LDAP base DN to select the search base on the LDAP server. To retrieve
the base DN automatically, click Fetch DN. YaST then checks for any LDAP
database on the server address specified above. Choose the appropriate base DN
from the search results given by YaST.
4 If TLS or SSL protected communication with the server is required, select LDAP
TLS/SSL.
5 If the LDAP server still uses LDAPv2, explicitly enable the use of this protocol
version by selecting LDAP Version 2.
6 Select Start Automounter to mount remote directories on your client, such as a
remotely managed /home.
7 Select Create Home Directory on Login to have a user's home automatically
created on the first user login.
8 Click Finish to apply your settings.
To modify data on the server as administrator, click Advanced Configuration. The following dialog is split into two tabs. See Figure 26.4, YaST: Advanced Configuration
(page 422).
1 In the Client Settings tab, adjust the following settings to your needs:
1a If the search base for users, passwords, and groups differs from the global
search base specified in the LDAP base DN, enter these different naming contexts in User Map, Password Map, and Group Map.
1b Specify the password change protocol. The standard method to use whenever
a password is changed is crypt, meaning that password hashes generated
by crypt are used. For details on this and other options, refer to the
pam_ldap man page.
1c Specify the LDAP group to use with Group Member Attribute. The default
value for this is member.
2 In Administration Settings, adjust the following settings:
2a Set the base for storing your user management data via Configuration Base
DN.
2b Enter the appropriate value for Administrator DN. This DN must be identical
with the rootdn value specified in /etc/openldap/slapd.conf to
enable this particular user to manipulate data stored on the LDAP server.
Enter the full DN (such as cn=Administrator,dc=example,dc=com)
or activate Append Base DN to have the base DN added automatically when
you enter cn=Administrator.
2c Check Create Default Configuration Objects to create the basic configuration
objects on the server to enable user management via LDAP.
2d If your client machine should act as a file server for home directories across
your network, check Home Directories on This Machine.
2e Use the Password Policy section to select, add, delete, or modify the password
policy settings to use. The configuration of password policies with YaST is
part of the LDAP server setup.
2f Click Accept to leave the Advanced Configuration then Finish to apply your
settings.
Use Configure User Management Settings to edit entries on the LDAP server. Access
to the configuration modules on the server is then granted according to the ACLs and
ACIs stored on the server. Follow the procedures outlined in Section 26.4.2, Configuring the YaST Group and User Administration Modules (page 423).
The dialog for module configuration (Figure 26.5, YaST: Module Configuration
(page 424)) allows the creation of new modules, selection and modification of existing
configuration modules, and design and modification of templates for such modules.
To create a new configuration module, proceed as follows:
1 Click New and select the type of module to create. For a user configuration
module, select suseuserconfiguration and for a group configuration
choose susegroupconfiguration.
2 Choose a name for the new template. The content view then features a table
listing all attributes allowed in this module with their assigned values. Apart from
all set attributes, the list also contains all other attributes allowed by the current
schema but currently not used.
3 Accept the preset values or adjust the defaults to use in group and user configuration by selecting the respective attribute, pressing Edit, and entering the new
value. Rename a module by simply changing the cn attribute of the module.
Clicking Delete deletes the currently selected module.
4 After you click Accept, the new module is added to the selection menu.
The YaST modules for group and user administration embed templates with sensible
standard values. To edit a template associated with a configuration module, proceed
as follows:
1 In the Module Configuration dialog, click Configure Template.
2 Determine the values of the general attributes assigned to this template according
to your needs or leave some of them empty. Empty attributes are deleted on the
LDAP server.
3 Modify, delete, or add new default values for new objects (user or group configuration objects in the LDAP tree).
Figure 26.6 YaST: Configuration of an Object Template
TIP
The default values for an attribute can be created from other attributes by
using a variable instead of an absolute value. For example, when creating a
new user, cn=%sn %givenName is created automatically from the attribute
values for sn and givenName.
Once all modules and templates are configured correctly and ready to run, new groups
and users can be registered in the usual way with YaST.
3d Enter the Plug-Ins tab, select the LDAP plug-in, and click Launch to configure additional LDAP attributes assigned to the new user (see Figure 26.7,
YaST: Additional LDAP Settings (page 427)).
4 Click Accept to apply your settings and leave the user configuration.
Figure 26.7 YaST: Additional LDAP Settings
The initial input form of user administration offers LDAP Options. This makes it possible to apply LDAP search filters to the set of available users or to go to the module
for the configuration of LDAP users and groups by selecting LDAP User and Group
Configuration.
4 To view any of the entries in detail, select it in the LDAP Tree view and open
the Entry Data tab.
All attributes and values associated with this entry are displayed.
Figure 26.9 Browsing the Entry Data
5 To change the value of any of these attributes, select the attribute, click Edit,
enter the new value, click Save, and provide the RootDN password when
prompted.
6 Leave the LDAP browser with Close.
/etc/openldap/schema/core.schema
/etc/openldap/schema/cosine.schema
/etc/openldap/schema/inetorgperson.schema
/etc/openldap/schema/rfc2307bis.schema
/etc/openldap/schema/yast.schema
These two files contain the PID (process ID) and some of the arguments the slapd
process is started with. There is no need for modifications here.
Example 26.4 slapd.conf: Access Control

# Sample Access Control
#   Allow read access of root DSE
#   Allow self write access
#   Allow authenticated users read access
#   Allow anonymous users to authenticate
# access to dn="" by * read
access to * by self write
            by users read
            by anonymous auth
#
# if no access controls are present, the default is:
#   Allow read by all
#
# rootdn can always write!
Example 26.4, slapd.conf: Access Control (page 430) is the excerpt from slapd
.conf that regulates the access permissions for the LDAP directory on the server. The
settings made here in the global section of slapd.conf are valid as long as no custom
access rules are declared in the database-specific section. These would overwrite the
global declarations. As presented here, all users have read access to the directory, but
only the administrator (rootdn) can write to this directory. Access control regulation
in LDAP is a highly complex process. The following tips can help:
Every access rule has the following structure:
access to <what> by <who> <access>
what is a placeholder for the object or attribute to which access is granted. Individual directory branches can be protected explicitly with separate rules. It is also
possible to process regions of the directory tree with one rule by using regular expressions. slapd evaluates all rules in the order in which they are listed in the
configuration file. More general rules should be listed after more specific ones: the
first rule slapd regards as valid is evaluated and all following entries are ignored.
who determines who should be granted access to the areas determined with what.
Regular expressions may be used. slapd again aborts the evaluation of who after
the first match, so more specific rules should be listed before the more general ones.
The entries shown in Table 26.2, User Groups and Their Access Grants (page 431)
are possible.
Table 26.2   User Groups and Their Access Grants

Tag                 Scope
anonymous           Unauthenticated ("anonymous") users
users               Authenticated users
self                The user connected with the target object
dn.regex=<regex>    All users matching the regular expression
access specifies the type of access. Use the options listed in Table 26.3, Types
of Access (page 432).
Table 26.3   Types of Access

Tag        Scope of Access
none       No access
auth       For contacting the server
compare    For objects subject to comparison
search     For the use of search filters
read       Read access
write      Write access
slapd compares the access right requested by the client with those granted in
slapd.conf. The client is granted access if the rules allow a higher or equal
right than the requested one. If the client requests higher rights than those declared
in the rules, it is denied access.
Example 26.5, slapd.conf: Example for Access Control (page 432) shows an example
of a simple access control that can be arbitrarily developed using regular expressions.
Example 26.5 slapd.conf: Example for Access Control
access to dn.regex="ou=([^,]+),dc=example,dc=com"
by dn.regex="cn=Administrator,ou=$1,dc=example,dc=com" write
by user read
by * none
This rule declares that only its respective administrator has write access to an individual
ou entry. All other authenticated users have read access and the rest of the world has
no access.
TIP: Establishing Access Rules
If no access to rule matches or no by directive applies, access is denied.
Only explicitly declared access rights are granted. If no rules are declared at
all, the default principle is write access for the administrator and read access
for the rest of the world.
Find detailed information and an example configuration for LDAP access rights in the
online documentation of the installed openldap2 package.
Apart from the possibility to administer access permissions with the central server
configuration file (slapd.conf), there is access control information (ACI). ACI allows
storage of the access information for individual objects within the LDAP tree. This type
of access control is not yet common and is still considered experimental by the developers. Refer to http://www.openldap.org/faq/data/cache/758.html
for information.
The type of database, a Berkeley database in this case, is set in the first line of
this section (see Example 26.6, slapd.conf: Database-Specific Directives
(page 433)).
suffix determines for which portion of the LDAP tree this server should be
responsible.
checkpoint determines the amount of data (in KB) that is kept in the transaction
log before it is written to the actual database and the time (in minutes) between
two write actions.
rootdn determines who owns administrator rights to this server. The user declared
here does not need to have an LDAP entry or exist as regular user.
The directory directive indicates the directory in the file system where the
database directories are stored on the server.
Custom access rules defined here for the database are used instead of the global
access rules.
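Put together, a database-specific section with these directives might read as follows (a sketch; rootpw is shown as a placeholder, and in practice a hashed value generated with slappasswd should be stored instead):

```
database bdb
suffix "dc=example,dc=com"
checkpoint 1024 5          # write after 1024 KB or 5 minutes
rootdn "cn=Administrator,dc=example,dc=com"
rootpw secret              # placeholder; use a hashed value
directory /var/lib/ldap
```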
automatically on boot and halt of the system. It is also possible to create the corresponding links to the start and stop scripts with the insserv command from a command
prompt as described in Section 12.2.2, Init Scripts (page 186).
Save the file with the .ldif suffix, then pass it to the server with the following command:
ldapadd -x -D <dn of the administrator> -W -f <file>.ldif
-x switches off the authentication with SASL in this case. -D declares the user that
calls the operation. The valid DN of the administrator is entered here just like it has
been configured in slapd.conf. In the current example, this is
cn=Administrator,dc=example,dc=com. -W circumvents entering the password on the command line (in clear text) and activates a separate password prompt.
This password was previously determined in slapd.conf with rootpw. The -f
option passes the filename. See the details of running ldapadd in Example 26.8,
ldapadd with example.ldif (page 436).
Example 26.8 ldapadd with example.ldif
ldapadd -x -D cn=Administrator,dc=example,dc=com -W -f example.ldif
Enter LDAP password:
adding new entry "dc=example,dc=com"
adding new entry "ou=devel,dc=example,dc=com"
adding new entry "ou=doc,dc=example,dc=com"
adding new entry "ou=it,dc=example,dc=com"
The user data of individuals can be prepared in separate LDIF files. Example 26.9,
LDIF Data for Tux (page 437) adds Tux to the new LDAP directory.
Example 26.9 LDIF Data for Tux
# coworker Tux
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
objectClass: inetOrgPerson
cn: Tux Linux
givenName: Tux
sn: Linux
mail: [email protected]
uid: tux
telephoneNumber: +49 1234 567-8
An LDIF file can contain an arbitrary number of objects. It is possible to pass entire
directory branches to the server at once or only parts of it as shown in the example of
individual objects. If it is necessary to modify some data relatively often, a fine subdivision of single objects is recommended.
Import the modified file into the LDAP directory with the following command:
ldapmodify -x -D cn=Administrator,dc=example,dc=com -W -f tux.ldif
1 Start ldapmodify and enter your password:
ldapmodify -x -D cn=Administrator,dc=example,dc=com -W
Enter LDAP password:
2 Enter the changes while carefully complying with the syntax in the order presented
below:
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: +49 1234 567-10
Find detailed information about ldapmodify and its syntax in the ldapmodify
man page.
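As a sketch, the same modification can also be prepared in a file and passed with -f, as shown earlier for tux.ldif (the file name is arbitrary):

```shell
# Write the modification from the example above into a file
cat > tux-modify.ldif <<'EOF'
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: +49 1234 567-10
EOF

# Then import it; this requires a running LDAP server and the
# administrator password, so the call is shown here as a comment:
# ldapmodify -x -D cn=Administrator,dc=example,dc=com -W -f tux-modify.ldif
```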
The -b option determines the search base, the section of the tree within which the
search should be performed. In the current case, this is dc=example,dc=com. To
perform a more finely-grained search in specific subsections of the LDAP directory
(for example, only within the devel department), pass this section to ldapsearch
with -b. -x requests activation of simple authentication. (objectClass=*) declares
that all objects contained in the directory should be read. This command option can be
used after the creation of a new directory tree to verify that all entries have been
recorded correctly and the server responds as desired. Find more information about the
use of ldapsearch in the corresponding man page (ldapsearch(1)).
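Putting the options just described together, the search command this paragraph refers to would read (base DN as in the running example):

```
ldapsearch -x -b dc=example,dc=com '(objectClass=*)'
```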
Understanding LDAP
A detailed general introduction to the basic principles of LDAP: http://www.redbooks.ibm.com/redbooks/pdfs/sg244986.pdf.
Printed literature about LDAP:
LDAP System Administration by Gerald Carter (ISBN 1-56592-491-6)
Understanding and Deploying LDAP Directory Services by Howes, Smith, and
Good (ISBN 0-672-32316-8)
The ultimate reference material for the subject of LDAP is the corresponding RFCs
(request for comments), 2251 to 2256.
27 Active Directory Support
Active Directory* (AD) is a directory service based on LDAP, Kerberos, and other
services that is used by Microsoft Windows to manage resources, services, and people.
In an MS Windows network, AD provides information about these objects, restricts
access to any of them, and enforces policies. openSUSE lets you join existing AD
domains and integrate your Linux machine into a Windows environment.
[Figure: Active Directory authentication schema. Kerberized applications and the NSS/PAM layers (nss_compat, nss_winbind, nscd; pam_unix2, pam_winbind, pam_mkhomedir) interact with winbindd and the Kerberos credential cache with its offline cache; winbindd communicates with the Windows DC (Active Directory) via LDAP and Kerberos.]
To communicate with the directory service, the client needs to share at least two protocols with the server:
LDAP
LDAP is a protocol optimized for managing directory information. A Windows
domain controller with AD can use the LDAP protocol to exchange directory information with the clients. To learn more about LDAP in general and about the open
source port of it, OpenLDAP, refer to Chapter 26, LDAP - A Directory Service
(page 409).
Kerberos
Kerberos is a third-party trusted authentication service. All its clients trust Kerberos's
judgment of another client's identity, enabling kerberized single-sign-on (SSO)
solutions. Windows supports a Kerberos implementation, making Kerberos SSO
possible even with Linux clients.
The following client components process account and authentication data:
Winbind
The most central part of this solution is the winbind daemon that is a part of the
Samba project and handles all communication with the AD server.
NSS (Name Service Switch)
NSS routines provide name service information. Naming service for both users
and groups is provided by nss_winbind. This module directly interacts with
the winbind daemon.
PAM (Pluggable Authentication Modules)
User authentication for AD users is done by the pam_winbind module. The
creation of user homes for the AD users on the Linux client is handled by pam
_mkhomedir. The pam_winbind module directly interacts with winbindd. To
learn more about PAM in general, refer to Chapter 18, Authentication with PAM
(page 263).
Applications that are PAM-aware, like the login routines and the GNOME and KDE
display managers, interact with the PAM and NSS layer to authenticate against the
Windows server. Applications supporting Kerberos authentication, such as file managers,
Web browsers, or e-mail clients, use the Kerberos credential cache to access user's
Kerberos tickets, making them part of the SSO framework.
1 The Windows domain controller providing both LDAP and KDC (Key Distribution Center) services is located.
2 A machine account for the joining client is created in the directory service.
3 An initial ticket granting ticket (TGT) is obtained for the client and stored in its
local Kerberos credential cache. The client needs this TGT to get further tickets
allowing it to contact other services, like contacting the directory server for LDAP
queries.
4 NSS and PAM configurations are adjusted to enable the client to authenticate
against the domain controller.
During client boot, the winbind daemon is started and retrieves the initial Kerberos
ticket for the machine account. winbindd automatically refreshes the machine's ticket
to keep it valid. To keep track of the current account policies, winbindd periodically
queries the domain controller.
Account disabled
The user sees an error message stating that his account has been disabled and that
he should contact the system administrator.
Account locked out
The user sees an error message stating that his account has been locked and that
he should contact the system administrator.
Password has to be changed
The user can log in but receives a warning that the password needs to be changed
soon. This warning is sent three days before the password expires. After expiration,
the user cannot log in again.
Invalid workstation
When a user is just allowed to log in from specific workstations and the current
openSUSE machine is not in that list, a message appears that this user cannot log
in from this workstation.
Invalid logon hours
When a user is only allowed to log in during working hours and tries to log in
outside working hours, a message shows that login is not possible at this point in
time.
Account expired
An administrator can set an expiration time for a specific user account. If that user
tries to log in after that time has passed, the user gets a message that the account
has expired and cannot be used to log in.
During a successful authentication, pam_winbind acquires a ticket granting ticket
(TGT) from the Kerberos server of Active Directory and stores it in the user's credential
cache. It also takes care of renewing the TGT in the background, not requiring any user
interaction.
openSUSE supports local home directories for AD users. If configured through YaST
as described in Section 27.3, Configuring a Linux Client for Active Directory
(page 447), user homes are created at the first login of a Windows (AD) user into the
Linux client. These home directories look and feel entirely the same as standard Linux
user home directories and work independently of the AD domain controller. Using a
local user home, it is possible to access a user's data on this machine, even when the
AD server is disconnected, if the Linux client has been configured to perform offline
authentication.
NTP
To succeed with Kerberos authentication, the client clock must be in sync with the
domain controller, for example by using a central NTP time server (this can also be
the AD domain controller). If the clock skew between your Linux host and the domain
controller exceeds a certain limit, Kerberos authentication fails and the client is logged
in using only the weaker NTLM (NT LAN Manager) authentication.
DHCP
If your client uses dynamic network configuration with DHCP, configure DHCP
to always provide the same IP address and hostname to the client. If possible, use static IP addresses
to be on the safe side.
Firewall
To browse your network neighborhood, either disable the firewall entirely or mark
the interface used for browsing as part of the internal zone.
To change the firewall settings on your client, log in as root and start the YaST
firewall module. Select Interfaces. Select your network interface from the list of
interfaces and click Change. Select Internal Zone and apply your settings with OK.
Leave the firewall settings with Next > Accept. To disable the firewall, just set
Service Start to Manually and leave the firewall module with Next > Accept.
AD Account
You cannot log in to an AD domain unless the AD administrator has provided you
with a valid user account for this domain. Use the AD username and password to
log in to the AD domain from your Linux client.
Join an existing AD domain during installation or by later activating SMB user authentication with YaST in the installed system.
NOTE
Currently only a domain administrator account, such as Administrator, can
join openSUSE into Active Directory.
To join an AD domain in a running system, proceed as follows:
1 Log in as root and start YaST.
2 Start Network Services > Windows Domain Membership.
3 Enter the domain to join at Domain or Workgroup in the Windows Domain
Membership screen (see Figure 27.2, Determining Windows Domain Membership
(page 449)). If the DNS settings on your host are properly integrated
with the Windows DNS server, enter the AD domain name in its DNS format
(mydomain.mycompany.com). If you enter the short name of your domain
(also known as the pre-Windows 2000 domain name), YaST must rely on
NetBIOS name resolution instead of DNS to find the correct domain controller.
To select from a list of available domains instead, use Browse to list the NetBIOS domains then select the desired domain.
Figure 27.2 Determining Windows Domain Membership
4 Check Also Use SMB Information for Linux Authentication to use the SMB
source for Linux authentication.
5 Check Create Home Directory on Login to automatically create a local home
directory for your AD user on the Linux machine.
6 Check Offline Authentication to allow your domain users to log in even if the
AD server is temporarily unavailable or you do not have a network connection.
7 Select Expert Settings if you want to change the UID and GID ranges for the
Samba users and groups. Let DHCP retrieve the WINS server (default setting).
8 Configure NTP time synchronization for your AD environment by selecting
NTP Configuration and entering an appropriate server name or IP address.
This step is obsolete if you have already entered the appropriate settings in
the standalone YaST NTP configuration module.
9 Click Finish and confirm the domain join when prompted for it.
10 Provide the password for the Windows administrator on the AD server and
click OK (see Figure 27.3, Providing Administrator Credentials (page 450)).
Figure 27.3 Providing Administrator Credentials
After you have joined the AD domain, you can log in to it from your workstation using
the display manager of your desktop or the console.
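Under the hood, YaST relies on Samba's net tool for the join. A roughly equivalent manual join, assuming smb.conf and Kerberos are already configured for the domain, can be sketched as:

```
net ads join -U Administrator
```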
To change your Windows password from the GNOME desktop, proceed as follows:
1 Click the Computer icon on the left edge of the panel.
2 Select Control Center.
3 From the Personal section, select Change Password.
4 Enter your old password.
5 Enter and confirm the new password.
6 Leave the dialog with Close to apply your settings.
To change your Windows password from the KDE desktop, proceed as follows:
1 Select Personal Settings from the main menu.
2 Select Security & Privacy.
3 Click Password & User Account.
4 Click Change Password.
5 Enter your current password.
6 Enter and confirm the new password and apply your settings with OK.
7 Leave the Personal Settings with File > Quit.
28 Sharing File Systems with NFS
One way to distribute files over the network is NFS (Network File System).
NFS works together with network information services like NIS (see also Chapter 25,
Using NIS (page 401)) or a directory service like LDAP (see also Chapter 26, LDAP - A
Directory Service (page 409)) to handle the information about how to use the available
services. To prevent unauthorized access, NFSv4 also makes it possible to use authentication with Kerberos (see also Chapter 39, Installing and Administering Kerberos
(page 621)). When configured correctly, it does not matter at which terminal users are
logged in, they always find themselves in the same environment.
Like NIS, NFS is a client/server system. A machine can be both: it can supply file
systems over the network (export) and mount file systems from other hosts (import).
In principle, all exports can be made using IP addresses only. To avoid time-outs,
however, you should have a working DNS system. This is necessary at least for logging
purposes, because the mountd daemon does reverse lookups.
All networked services heavily rely on a correct system time. If you intend to set up
such services, one of the first things that should be configured is the time synchronization
as described in Chapter 24, Time Synchronization with NTP (page 393).
If the /home directory from the machine nfs.example.com should be imported, first
create a local directory /home and then use the following command:
mount nfs.example.com:/home /home
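The next paragraph explains the placeholders of the generic NFSv4 mount command, which has this shape (host and local-path are placeholders):

```
mount -t nfs4 host:/ local-path
```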
Replace host with the NFS server that hosts one or more NFSv4 exports and
local-path with the directory location in the client machine where this should be
mounted. For example, to import /home exported with NFSv4 on nfs.example.com to
/local/home, use the following command:
mount -t nfs4 nfs.example.com:/ /local
Note that the remote file system path that follows the server name and a colon is always
a slash (/). This is unlike the way it is specified for v3 imports, where the exact path
of the remote file system is given. This is a concept called pseudo file system, which is
explained in Section 28.2, Exporting File Systems over NFS (page 459).
Now the /nfsmounts directory acts as a root for all the NFS mounts on the client if
the auto.nfs file is completed appropriately. The name auto.nfs is chosen for the
sake of convenience; you can choose any name. In the selected file (create it if it does
not exist), add entries for all the NFS mounts as in the following example:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Make sure that auto.nfs is executable with the command chmod 755 auto.nfs.
Then activate the settings with rcautofs start. For this example, /nfsmounts/
localdata, the /data directory of server1, is then mounted with NFS and
/nfsmounts/nfs4mount from server2 is mounted with NFSv4.
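For the /nfsmounts root used in this example, the corresponding entry in /etc/auto.master would be a line like the following (assuming auto.nfs is stored in /etc):

```
/nfsmounts /etc/auto.nfs
```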
If the /etc/auto.master file is edited while the service autofs is running, the automounter must be restarted for the changes to take effect. Do this with rcautofs restart.
NFSv4 mounts may also be added to the /etc/fstab file manually. For these mounts,
use nfs4 instead of nfs in the third column and make sure that the remote file system
is given as / after the nfs.example.com: in the first column. A sample line for an
NFSv4 mount in /etc/fstab looks like this:
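Such a line might look like this (server name as in the earlier examples; the noauto option discussed below is included):

```
nfs.example.com:/  /local  nfs4  noauto  0 0
```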
The noauto option prevents the file system from being mounted automatically at start
up. If you want to mount the respective file system manually, you can shorten the mount
command and provide only the mount point, as in:
mount /local/path
Note that if you do not enter the noauto option, the initialization scripts of the system
will handle the mount of those file systems at start up.
Next, activate NFS Server: Start. If you intend to use NFSv4, activate Enable NFSv4
and enter the NFSv4 domain name. The NFSv4 domain name must be identical with
the one used on all clients that should connect to the server.
Click Enable GSS Security if you need secure access to the server. A prerequisite is
that Kerberos is installed in your domain and that both the server and the clients are
kerberized. Click Next.
In the upper text field, enter the directories to export. Below, enter the hosts that should
have access to the respective directory. This dialog is shown in Figure 28.3, Configuring
an NFS Server with YaST (page 461). The figure shows the scenario where NFSv4 is
enabled in the previous dialog.
For a fixed set of clients, there are two types of directories that can be exported: directories that act as pseudo root file systems and those that are bound to some subdirectory
of the pseudo file system. This pseudo file system acts as a base point under which all
file systems exported for the same client set take their place. For a client or set of clients,
only one directory on the server can be configured as pseudo root for export. For this
same client, export multiple directories by binding them to some existing subdirectory
in the pseudo root.
After adding a directory in the upper half, another dialog for entering the client and
option information pops up automatically. Later on, to add a new client (client set),
click Add Host.
In the small dialog that opens, enter the host wild card. There are four possible types
of host wild cards that can be set for each host: a single host (name or IP address), netgroups, wild cards (such as * indicating all machines can access the server), and IP
networks. Then, in Options, include fsid=0 in the comma-separated list of options
to configure the directory as pseudo root. If this directory should be bound to another
directory under an already configured pseudo root, make sure that a target bind path is
given in the option list with bind=/target/path.
For example, suppose that the directory /exports is chosen as the pseudo root directory for all the clients that can access the server. Then add this in the upper half and
make sure that the options entered for this directory include fsid=0. If there is another
directory, /data, that also needs to be NFSv4 exported, add this directory to the upper
half. While entering options for this, make sure that bind=/exports/data is in
the list and that /exports/data is an already existing subdirectory of /exports.
Any change in the option bind=/target/path, whether addition, deletion, or
change in value, is reflected in Bindmount Targets. This column is not directly editable;
instead, it summarizes the configured directories and their nature. After the information is
complete, click Finish to complete the configuration or Start to restart the service.
For more information about options available regarding the directory export, refer to
the man page of exports (man exports). Click Finish to complete the configuration.
Figure 28.3 Configuring an NFS Server with YaST
There are four possible types of host wild cards that can be set for each host: a single host (name or IP address), netgroups,
wild cards (such as * indicating all machines can access the server), and IP networks.
This dialog is shown in Figure 28.4, Exporting Directories with NFSv2 and v3
(page 462). Find a more thorough explanation of these options in man exports. Click
Finish to complete the configuration.
Figure 28.4 Exporting Directories with NFSv2 and v3
IMPORTANT
Automatic Firewall Configuration
If SuSEfirewall2 is active on your system, YaST adapts its configuration for the
NFS server by enabling the NFS service when Open Ports in Firewall is selected.
For example:
/export 192.168.1.2(rw,fsid=0,sync)
/data 192.168.1.2(rw,bind=/export/data,sync)
Those directories for which fsid=0 is specified in the option list are called pseudo
root file systems. Here, the IP address 192.168.1.2 is used. You can use the name
of the host, a wild card indicating a set of hosts (*.abc.com, *, etc.), or netgroups.
For a fixed set of clients, there are only two types of directories that can be NFSv4 exported:
A single directory that is chosen as the pseudo root file system. In this example,
/export is the pseudo root directory because fsid=0 is specified in the option
list for this entry.
Directories that are chosen to be bound to an existing subdirectory of the
pseudo file system. In the example entries above, /data is such a directory that
binds to an existing subdirectory (/export/data) of the pseudo file system
/export.
The pseudo file system is the top level directory under which all file systems that need
to be NFSv4 exported take their places. For a client or set of clients, there can only be
one directory on the server configured as the pseudo root for export. For this same client
or client set, multiple other directories can be exported by binding them to some existing
subdirectory in the pseudo root.
/etc/sysconfig/nfs
This file contains a few parameters that determine NFSv4 server daemon behavior.
Importantly, the parameter NFS4_SUPPORT must be set to yes. This parameter determines whether the NFS server supports NFSv4 exports and clients.
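In /etc/sysconfig/nfs, this amounts to a line like the following (the exact variable name may vary between versions):

```
NFS4_SUPPORT="yes"
```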
/etc/idmapd.conf
Every user on a Linux machine has a name and ID. idmapd does the name-to-ID mapping
for NFSv4 requests to the server and replies to the client. This must be running on both
server and client for NFSv4, because NFSv4 uses only names in its communication.
Make sure that there is a uniform way in which usernames and IDs (uid) are assigned
to users across machines that might be sharing file systems using NFS. This
can be achieved by using NIS, LDAP, or any uniform domain authentication mechanism
in your domain.
For proper function, the parameter Domain must be set the same for both client and
server in this file. If you are not sure, leave the domain as localdomain in both
server and client files. A sample configuration file looks like the following:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
Do not change these parameters unless you are sure of what you are doing. For further
reference, read the man pages of idmapd and idmapd.conf (man idmapd, man
idmapd.conf).
For example:
/export 192.168.1.2(rw,sync)
Here, the directory /export is shared with the host 192.168.1.2 with the option list
rw,sync. This IP address can be replaced with a client name or set of clients using a
wild card (such as *.abc.com) or even netgroups.
For a detailed explanation of all options and their meanings, refer to the man page of
exports (man exports).
After changing /etc/exports or /etc/sysconfig/nfs, start or restart the
NFS server using the command rcnfsserver restart.
29 Samba
Using Samba, a Unix machine can be configured as a file and print server for DOS,
Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather
complex product. Configure Samba with YaST, SWAT (a Web interface), or the configuration file.
29.1 Terminology
The following are some terms used in Samba documentation and in the YaST module.
SMB protocol
Samba uses the SMB (server message block) protocol that is based on the NetBIOS
services. Due to pressure from IBM, Microsoft released the protocol so other software manufacturers could establish connections to a Microsoft domain network.
With Samba, the SMB protocol works on top of the TCP/IP protocol, so the TCP/IP
protocol must be installed on all clients.
CIFS protocol
CIFS (common Internet file system) protocol is another protocol supported by
Samba. CIFS defines a standard remote file system access protocol for use over
the network, enabling groups of users to work together and share documents across
the network.
NetBIOS
NetBIOS is a software interface (API) designed for communication between machines. Here, a name service is provided. It enables machines connected to the
network to reserve names for themselves. After reservation, these machines can be
addressed by name. There is no central process that checks names. Any machine
on the network can reserve as many names as it wants as long as the names are not
already in use. The NetBIOS interface can now be implemented for different network architectures. An implementation that works relatively closely with network
hardware is called NetBEUI, but this is often referred to as NetBIOS. Network
protocols implemented with NetBIOS are IPX from Novell (NetBIOS via IPX)
and TCP/IP.
The NetBIOS names sent via TCP/IP have nothing in common with the names
used in /etc/hosts or those defined by DNS. NetBIOS uses its own, completely
independent naming convention. However, it is recommended to use names that
correspond to DNS hostnames to make administration easier. This is the default
used by Samba.
Samba server
Samba server is a server that provides SMB/CIFS services and NetBIOS over IP
naming services to clients. For Linux, there are two daemons for Samba server:
smbd for SMB/CIFS services and nmbd for naming services.
Samba client
Samba client is a system that uses Samba services from a Samba server over the
SMB protocol. All common operating systems, such as Mac OS X, Windows, and
OS/2, support the SMB protocol. The TCP/IP protocol must be installed on all
computers. Samba provides a client for the different UNIX flavors. For Linux,
there is a kernel module for SMB that allows the integration of SMB resources on
the Linux system level. You do not need to run any daemon for the Samba client.
Shares
SMB servers provide resources to their clients by means of shares. Shares
are printers and directories with their subdirectories on the server. A share is exported
by means of a name and can be accessed by its name. The share name can be set
to any name; it does not have to be the name of the exported directory. A printer is
also assigned a name. Clients can access the printer by its name.
Shares
In the Shares tab, determine the Samba shares to activate. There are some predefined
shares, like homes and printers. Use Toggle Status to switch between Active and Inactive.
Click Add to add new shares and Delete to delete the selected share.
Identity
In the Identity tab, you can determine the domain with which the host is associated
(Base Settings) and whether to use an alternative hostname in the network (NetBIOS
Host Name). To set expert global settings or set user authentication, click Advanced
Settings.
workgroup = TUX-NET
This line assigns the Samba server to a workgroup. Replace TUX-NET with an
appropriate workgroup of your networking environment. Your Samba server appears
under its DNS name unless this name has been assigned to any other machine in
the network. If the DNS name is not available, set the server name using
netbios name = MYNAME. See man smb.conf for more details about this parameter.
os level = 2
This parameter determines whether your Samba server tries to become LMB (local
master browser) for its workgroup. Choose a very low value to spare the existing
Windows network from any disturbances caused by a misconfigured Samba server.
More information about this important topic can be found in the files
BROWSING.txt and BROWSING-Config.txt under the textdocs subdirectory of the package documentation.
If no other SMB server is present in your network (such as a Windows NT or 2000
server) and you want the Samba server to keep a list of all systems present in the
local environment, set the os level to a higher value (for example, 65). Your
Samba server is then chosen as LMB for your local network.
When changing this setting, consider carefully how this could affect an existing
Windows network environment. First test the changes in an isolated network or at
a noncritical time of day.
wins support and wins server
To integrate your Samba server into an existing Windows network with an active
WINS server, enable the wins server option and set its value to the IP address
of that WINS server.
If your Windows machines are connected to separate subnets and should still be
aware of each other, you need to set up a WINS server. To turn a Samba server
into such a WINS server, set the option wins support = Yes. Make sure that
only one Samba server of the network has this setting enabled. The options wins
server and wins support must never be enabled at the same time in your
smb.conf file.
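In smb.conf terms, a network would therefore contain exactly one Samba server with the first of these [global] settings, and the others pointing at it (IP address hypothetical):

```
# on the one designated WINS server only:
wins support = Yes

# on every other Samba server instead:
; wins server = 192.168.1.1
```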
Shares
The following examples illustrate how a CD-ROM drive and the user directories
(homes) are made available to the SMB clients.
[cdrom]
To avoid having the CD-ROM drive accidentally made available, these lines are
deactivated with comment marks (semicolons in this case). Remove the semicolons
in the first column to share the CD-ROM drive with Samba.
Example 29.1 A CD-ROM Share (deactivated)
;[cdrom]
;   comment = Linux CD-ROM
;   path = /media/cdrom
;   locking = No
[homes]
As long as there is no other share using the share name of the user connecting
to the SMB server, a share is dynamically generated using the [homes] share
directives. The resulting name of the share is the username.
valid users = %S
%S is replaced with the concrete name of the share as soon as a connection has
been successfully established. For a [homes] share, this is always the username. As a consequence, access rights to a user's share are restricted exclusively
to the user.
browseable = No
This setting makes the share invisible in the network environment.
read only = No
By default, Samba prohibits write access to any exported share by means of
the read only = Yes parameter. To make a share writable, set the value
read only = No, which is synonymous with writable = Yes.
create mask = 0640
Systems that are not based on MS Windows NT do not understand the concept
of UNIX permissions, so they cannot assign permissions when creating a file.
The parameter create mask defines the access permissions assigned to
newly created files. This only applies to writable shares. In effect, this setting
means the owner has read and write permissions and the members of the
owner's primary group have read permissions. valid users = %S prevents
read access even if the group has read permissions. For the group to have read
or write access, deactivate the line valid users = %S.
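A quick shell sketch of what mode 0640 grants (owner read/write, group read, others nothing):

```shell
# Create a file and apply the permissions that create mask = 0640 yields
touch demo-file
chmod 0640 demo-file
ls -l demo-file    # the permission column reads -rw-r-----
```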
Security Levels
To improve security, each share access can be protected with a password. SMB has
three possible ways of checking the permissions:
Share Level Security (security = share)
A password is firmly assigned to a share. Everyone who knows this password has
access to that share.
User Level Security (security = user)
This variation introduces the concept of the user to SMB. Each user must register
with the server with his own password. After registration, the server can grant access
to individual exported shares dependent on usernames.
Server Level Security (security = server)
To its clients, Samba pretends to be working in user level mode. However, it
passes all password queries to another user level mode server, which takes care of
authentication. This setting expects an additional parameter (password server).
The selection of share, user, or server level security applies to the entire server. It is
not possible to offer individual shares of a server configuration with share level security
and others with user level security. However, you can run a separate Samba server for
each configured IP address on a system.
More information about this subject can be found in the Samba HOWTO Collection.
For multiple servers on one system, pay attention to the options interfaces and
bind interfaces only.
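For example, each server instance's configuration might bind it to one address in its [global] section (the address and netmask are assumptions):

```
[global]
   interfaces = 192.168.1.10/24
   bind interfaces only = Yes
```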
selected with the mouse. If you activate Also Use SMB Information for Linux Authentication, user authentication runs through the Samba server. After completing all settings,
click Finish to finish the configuration.
If encrypted passwords are used for verification purposes (this is the default setting
with well-maintained MS Windows 9x installations, MS Windows NT 4.0 from Service
Pack 3, and all later products), the Samba server must be able to handle them. The entry
encrypt passwords = yes in the [global] section enables this (with Samba
version 3, this is now the default). In addition, it is necessary to prepare user accounts
and passwords in an encryption format that conforms with Windows. Do this with the
command smbpasswd -a name. Create the domain account for the computers, required by the Windows NT domain concept, with the following commands:
Example 29.4 Setting Up a Machine Account
useradd hostname\$
smbpasswd -a -m hostname
With the useradd command, a dollar sign is appended to the hostname. The command
smbpasswd inserts it automatically when the parameter -m is used. The commented configuration example
(/usr/share/doc/packages/samba/examples/smb.conf.SuSE) contains
settings that automate this task.
Example 29.5 Automated Setup of a Machine Account
add machine script = /usr/sbin/useradd -g nogroup -c "NT Machine Account" \
-s /bin/false %m\$
To make sure that Samba can execute this script correctly, choose a Samba user with
the required administrator permissions. To do so, select one user and add it to the
ntadmin group. After that, all users belonging to this Linux group can be assigned
Domain Admin status with the command:
net groupmap add ntgroup="Domain Admins" unixgroup=ntadmin
More information about this topic is provided in Chapter 12 of the Samba HOWTO
Collection, found in /usr/share/doc/packages/samba/
Samba-HOWTO-Collection.pdf.
Samba
479
The Samba HOWTO Collection provided by the Samba team includes a section about
troubleshooting. In addition to that, Part V of the document provides a step-by-step
guide to checking your configuration. You can find the Samba HOWTO Collection in
/usr/share/doc/packages/samba/Samba-HOWTO-Collection.pdf
after installing the package samba-doc.
Find detailed information about LDAP and migration from Windows NT or 2000 in
/usr/share/doc/packages/samba/examples/LDAP/
smbldap-tools-*/doc, where * is your smbldap-tools version.
30 The Apache HTTP Server
With a share of more than 70%, the Apache HTTP Server (Apache) is the world's most
widely used Web server according to the survey from http://www.netcraft.com/.
Apache, developed by the Apache Software Foundation (http://www.apache.org/),
is available for most operating systems. openSUSE includes Apache version 2.2. In
this chapter, learn how to install, configure, and set up a Web server; how to use SSL,
CGI, and additional modules; and how to troubleshoot Apache.
30.1.1 Requirements
Make sure that the following requirements are met before trying to set up the Apache
Web server:
1. The machine's network is configured properly. For more information about this
topic, refer to Chapter 20, Basic Networking (page 299).
2. The machine's exact system time is maintained by synchronizing with a time
server. This is necessary because parts of the HTTP protocol depend on the correct
time. See Chapter 24, Time Synchronization with NTP (page 393) to learn more
about this topic.
3. The latest security updates are installed. If in doubt, run a YaST Online Update.
4. The default Web server port (port 80) is opened in the firewall. For this, configure
SuSEfirewall2 to allow the HTTP Server service in the external zone. This
can be done using YaST. Section 35.4.1, Configuring the Firewall with YaST
(page 583) gives details.
30.1.2 Installation
Apache on openSUSE is not installed by default. To install it, start YaST and select
Software > Software Management. Now choose Filter > Patterns and select Web and
LAMP Server under Server Functions. Confirm the installation of the dependent packages
to finish the installation process.
Apache is installed with a standard, predefined configuration that runs out of the box.
The installation includes the multiprocessing module apache2-prefork as well
as the PHP5 module. Refer to Section 30.4, Installing, Activating, and Configuring
Modules (page 499) for more information about modules.
30.1.3 Start
To start Apache and make sure that it is automatically started during boot, start YaST
and select System > System Services (Runlevel). Search for apache2 and Enable the
service. The Web server starts immediately. By saving your changes with Finish, the
system is configured to automatically start Apache in runlevels 3 and 5 during boot.
For more information about the runlevels in openSUSE and a description of the YaST
runlevel editor, refer to Section 12.2.3, Configuring System Services (Runlevel) with
YaST (page 190).
To start Apache using the shell, run rcapache2 start. To make sure that Apache
is automatically started during boot in runlevels 3 and 5, use chkconfig -a
apache2.
If no error messages appear when starting Apache, the Web server should now be
running. Start a browser and open http://localhost/. You should see an Apache
test page starting with "If you can see this, it means that the installation of
the Apache Web server software on this system was successful." If you do not see this
page, refer to Section 30.8, Troubleshooting (page 517).
Now that the Web server is running, you can add your own documents, adjust the configuration according to your needs, or add functionality by installing modules.
Configuration Files
Apache configuration files can be found in two different locations:
/etc/sysconfig/apache2
/etc/apache2/
/etc/sysconfig/apache2
/etc/sysconfig/apache2 controls some global settings of Apache, such as the modules
to load, additional configuration files to include, flags with which the server should be
started, and flags that should be added to the command line. Every configuration option
in this file is extensively documented and therefore not mentioned here.
/etc/apache2/
/etc/apache2/ hosts all configuration files for Apache. In the following, the purpose
of each file is explained. Each file includes several configuration options (also referred
to as directives). Every configuration option in these files is extensively documented
and therefore not mentioned here.
The Apache configuration files are organized as follows:
/etc/apache2/
|- charset.conv
|- conf.d/
|   |- *.conf
|- default-server.conf
|- errors.conf
|- httpd.conf
|- listen.conf
|- magic
|- mime.types
|- mod_*.conf
|- server-tuning.conf
|- ssl.*
|- ssl-global.conf
|- sysconfig.d/
|   |- global.conf
|   |- include.conf
|   |- loadmodule.conf
|- uid.conf
|- vhosts.d/
|   |- *.conf
.template for examples. By doing so, you can provide different module sets
for different virtual hosts.
default-server.conf
Global configuration for all virtual hosts with reasonable defaults. Instead of
changing the values, overwrite them with a virtual host configuration.
errors.conf
Defines how Apache responds to errors. To customize these messages for all virtual
hosts, edit this file. Otherwise overwrite these directives in your virtual host configurations.
httpd.conf
The main Apache server configuration file. Avoid changing this file. It mainly
contains include statements and global settings. Overwrite global settings in the
respective configuration files listed here. Change host-specific settings (such as
document root) in your virtual host configuration.
listen.conf
Binds Apache to specific IP addresses and ports. Name-based virtual hosting (see
Section Name-Based Virtual Hosts (page 487)) is also configured here.
magic
Data for the mime_magic module that helps Apache automatically determine the
MIME type of an unknown file. Do not change.
mime.types
MIME types known by the system (this actually is a link to /etc/mime.types).
Do not edit. If you need to add MIME types not listed here, add them to
mod_mime-defaults.conf.
mod_*.conf
Configuration files for the modules that are installed by default. Refer to Section 30.4, Installing, Activating, and Configuring Modules (page 499) for details.
Note that configuration files for optional modules reside in the directory conf.d.
server-tuning.conf
Contains configuration directives for the different MPMs (see Section 30.4.4,
Multiprocessing Modules (page 503)) as well as general configuration options
that control Apache's performance. Properly test your Web server when making
changes here.
The Apache HTTP Server
485
The opening VirtualHost tag takes the IP address (or fully qualified domain name)
previously declared with the NameVirtualHost as an argument in a name-based
virtual host configuration. A port number previously declared with the
NameVirtualHost directive is optional.
The wildcard * is also allowed as a substitute for the IP address. This syntax is only
valid in combination with the wildcard usage in NameVirtualHost *. When using
IPv6 addresses, the address must be enclosed in square brackets.
Example 30.2 Name-Based VirtualHost Directives
<VirtualHost 192.168.3.100:80>
...
</VirtualHost>
<VirtualHost 192.168.3.100>
...
</VirtualHost>
<VirtualHost *:80>
...
</VirtualHost>
<VirtualHost *>
...
</VirtualHost>
<VirtualHost [2002:c0a8:364::]>
...
</VirtualHost>
The physical server must have one IP address for each IP-based virtual host. If the
machine does not have multiple network cards, virtual network interfaces (IP aliasing)
can also be used.
The following example shows Apache running on a machine with the IP
192.168.3.100, hosting two domains on the additional IPs 192.168.3.101 and
192.168.3.102. A separate VirtualHost block is needed for every virtual
server.
Example 30.3 IP-Based VirtualHost Directives
<VirtualHost 192.168.3.101>
...
</VirtualHost>
<VirtualHost 192.168.3.102>
...
</VirtualHost>
Here, VirtualHost directives are only specified for interfaces other than
192.168.3.100. When a Listen directive is also configured for
192.168.3.100, a separate IP-based virtual host must be created to answer HTTP
requests to that interface; otherwise the directives found in the default server configuration (/etc/apache2/default-server.conf) are applied.
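Such an additional block might look as follows (a sketch; the address and paths follow the examples in this section):

```apache
Listen 192.168.3.100:80

<VirtualHost 192.168.3.100>
    ServerName www.example.com
    DocumentRoot /srv/www/htdocs
</VirtualHost>
```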
ErrorLog
The error log file for this virtual host. Although it is not necessary to create separate
error log files for each virtual host, it is common practice to do so, because it makes
debugging of errors much easier. /var/log/apache2/ is the default directory
where Apache's log files should be kept.
CustomLog
The access log file for this virtual host. Although it is not necessary to create separate
access log files for each virtual host, it is common practice to do so, because it allows
separate analysis of access statistics for each host. /var/log/apache2/ is the
default directory where Apache's log files should be kept.
As mentioned above, access to the whole file system is forbidden by default for security
reasons. Therefore, explicitly unlock the directories in which you have placed the files
Apache should serve, for example the DocumentRoot:
<Directory "/srv/www/www.example.com/htdocs">
Order allow,deny
Allow from all
</Directory>
Modules
The Modules configuration option allows for the activation or deactivation of the script
languages the Web server should support. For the activation or deactivation of other
modules, refer to Section Server Modules (page 496). Click Next to advance to the
next dialog.
Default Host
This option pertains to the default Web server. As explained in Section Virtual Host
Configuration (page 486), Apache can serve multiple virtual hosts from a single physical machine. The first declared virtual host in the configuration file is commonly referred
to as the default host. Each virtual host inherits the default host's configuration.
To edit the host settings (also called directives), choose the appropriate entry in the table
then click Edit. To add new directives, click Add. To delete a directive, select it and
click Delete.
Directory
With the Directory setting, you can enclose a group of configuration options
that will only apply to the specified directory.
Access and display options for the directories /usr/share/apache2/icons
and /srv/www/cgi-bin are configured here. It should not be necessary to
change the defaults.
Include
With Include, additional configuration files can be specified. Two Include directives are already preconfigured: /etc/apache2/conf.d/ is the directory
containing the configuration files that come with external modules. With this directive, all files in this directory ending in .conf are included. With the second directive, /etc/apache2/conf.d/apache2-manual?conf, the
apache2-manual configuration file is included.
Server Name
This specifies the default URL used by clients to contact the Web server. Use a
fully qualified domain name (FQDN) to reach the Web server at http://FQDN/
or its IP address. You cannot choose an arbitrary name here; the server must be
known under this name.
Server Administrator E-Mail
E-mail address of the server administrator. This address is, for example, shown on
error pages Apache creates.
After finishing with the Default Host step, click Next to continue with the configuration.
Virtual Hosts
In this step, the wizard displays a list of already configured virtual hosts (see Section
Virtual Host Configuration (page 486)). If you have not made manual changes prior
to starting the YaST HTTP wizard, no virtual host is present.
To add a host, click Add to open a dialog in which to enter basic information about the
host. Server Identification includes the server name, server contents root
(DocumentRoot), and administrator e-mail. Server Resolution is used to determine
how a host is identified (name based or IP based). Specify the name or IP address
with Change Virtual Host ID.
Clicking Next advances to the second part of the virtual host configuration dialog.
In part two of the virtual host configuration you can specify whether to enable CGI
scripts and which directory to use for these scripts. It is also possible to enable SSL. If
you do so, you must specify the path to the certificate as well. See Section 30.6.2,
Configuring Apache with SSL (page 514) for details on SSL and certificates. With
the Directory Index option, you can specify which file to display when the client requests
a directory (by default, index.html). Add one or more filenames (space-separated) if
you want to change this. With Enable Public HTML, the content of the users' public
directories (~user/public_html/) is made available on the server under
http://www.example.com/~user.
IMPORTANT: Creating Virtual Hosts
It is not possible to add virtual hosts at will. If using name-based virtual hosts,
each hostname must be resolved on the network. If using IP-based virtual hosts,
you can assign only one host to each IP address available.
Summary
This is the final step of the wizard. Here, determine how and when the Apache server
is started: when booting or manually. Also see a short summary of the configuration
made so far. If you are satisfied with your settings, click Finish to complete configuration. If you want to change something, click Back until you have reached the desired
dialog. Clicking HTTP Server Expert Configuration opens the dialog described in
Section HTTP Server Configuration (page 495).
also restart or reload the Web server (see Section 30.3, Starting and Stopping Apache
(page 497) for details). These commands are effective immediately.
Figure 30.3 HTTP Server Configuration: Listen Ports and Addresses
Server Modules
You can change the status (enabled or disabled) of Apache2 modules by clicking Toggle
Status. Click Add Module to add a new module that is already installed but not yet
listed. Learn more about modules in Section 30.4, Installing, Activating, and Configuring Modules (page 499).
startssl
Starts Apache with SSL support if it is not already running. For more information
about SSL support, refer to Section 30.6, Setting Up a Secure Web Server with
SSL (page 509).
stop
Stops Apache by terminating the parent process.
restart
Stops then restarts Apache. Starts the Web server if it was not running before.
try-restart
Stops then restarts Apache only if it has been running before.
reload or graceful
Stops the Web server by advising all forked Apache processes to first finish their
requests before shutting down. As each process dies, it is replaced by a newly
started one, resulting in a complete restart of Apache.
TIP
rcapache2 reload is the preferred method of restarting Apache in
production environments, for example, to activate a change in the configuration, because it allows all clients to be served without causing connection
break-offs.
configtest
Checks the syntax of the configuration files without affecting a running Web
server. Because this check is forced every time the server is started, reloaded, or
restarted, it is usually not necessary to run the test explicitly (if a configuration error
is found, the Web server is not started, reloaded, or restarted).
probe
Probes for the necessity of a reload (checks whether the configuration has changed)
and suggests the required arguments for the rcapache2 command.
server-status and full-server-status
Dumps a short or full status screen, respectively. Requires either lynx or w3m to be
installed, as well as the module mod_status enabled. In addition, status must
be added to APACHE_SERVER_FLAGS in the file /etc/sysconfig/apache2.
mod_actions
Provides methods to execute a script whenever a certain MIME type (such as
application/pdf), a file with a specific extension (like .rpm), or a certain
request method (such as GET) is requested. This module is enabled by default.
mod_alias
Provides Alias and Redirect directives with which you can map a URL to a
specific directory (Alias) or redirect a requested URL to another location (Redirect). This
module is enabled by default.
mod_auth*
The authentication modules provide different authentication methods: basic authentication with mod_auth_basic or digest authentication with mod_auth_digest. Digest
authentication in Apache 2.2 is considered experimental.
mod_auth_basic and mod_auth_digest must be combined with an authentication
provider module, mod_authn_* (for example, mod_authn_file for text file-based
authentication) and with an authorization module mod_authz_* (for example,
mod_authz_user for user authorization).
More information about this topic is available in the Authentication HOWTO at
http://httpd.apache.org/docs/2.2/howto/auth.html.
mod_autoindex
Autoindex generates directory listings when no index file (for example, index
.html) is present. The look and feel of these indexes is configurable. This module
is enabled by default. However, directory listings are disabled by default via the
Options directive; overwrite this setting in your virtual host configuration. The
default configuration file for this module is located at
/etc/apache2/mod_autoindex-defaults.conf.
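For example, to re-enable listings for a single directory, the Options setting can be overridden in a Directory block (the path is a hypothetical example):

```apache
<Directory "/srv/www/htdocs/downloads">
    Options +Indexes
</Directory>
```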
mod_cgi
mod_cgi is needed to execute CGI scripts. This module is enabled by default.
mod_deflate
Using this module, Apache can be configured to compress given file types on the
fly before delivering them.
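A minimal sketch of such a setup, compressing HTML and plain-text responses (the choice of MIME types is an assumption):

```apache
AddOutputFilterByType DEFLATE text/html text/plain
```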
mod_dir
mod_dir provides the DirectoryIndex directive with which you can configure
which files are automatically delivered when a directory is requested (index.html
by default). It also provides an automatic redirect to the correct URL when a directory
request does not contain a trailing slash. This module is enabled by default.
mod_env
Controls the environment that is passed to CGI scripts or SSI pages. Environment
variables can be set or unset or passed from the shell that invoked the httpd process.
This module is enabled by default.
mod_expires
With mod_expires, you can control how often proxy and browser caches refresh
your documents by sending an Expires header. This module is enabled by default.
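For example, to let caches keep images for a month (the MIME type and interval are illustrations):

```apache
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
```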
mod_include
mod_include lets you use Server Side Includes (SSI), which provide a basic functionality to generate HTML pages dynamically. This module is enabled by default.
mod_info
Provides a comprehensive overview of the server configuration under http://localhost/server-info/. For security reasons, you should always limit access to this URL.
By default, only localhost is allowed to access this URL. mod_info is configured
in /etc/apache2/mod_info.conf.
mod_log_config
With this module, you can configure the format of the Apache log files. This module
is enabled by default.
mod_mime
The mime module ensures that a file is delivered with the correct MIME type header,
based on the filename extension (for example, text/html for HTML documents).
This module is enabled by default.
mod_negotiation
Necessary for content negotiation. See http://httpd.apache.org/docs/
2.2/content-negotiation.html for more information. This module is
enabled by default.
mod_rewrite
Provides the functionality of mod_alias, but offers more features and flexibility.
With mod_rewrite, you can redirect URLs based on multiple rules, request headers,
and more.
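A small sketch of a rewrite rule that permanently redirects an old URL to a new one (both paths are hypothetical):

```apache
RewriteEngine On
# Send clients of the old URL to the new location with a 301 redirect
RewriteRule ^/old-page\.html$ /new-page.html [R=301,L]
```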
mod_setenvif
Sets environment variables based on details of the client's request, such as the
browser string the client sends, or the client's IP address. This module is enabled
by default.
mod_speling
mod_speling attempts to automatically correct typographical errors in URLs, such
as capitalization errors.
mod_ssl
Enables encrypted connections between Web server and clients. See Section 30.6,
Setting Up a Secure Web Server with SSL (page 509) for details. This module is
enabled by default.
mod_status
Provides information on server activity and performance under http://localhost/server-status/. For security reasons, you should always limit access to this URL. By
default, only localhost is allowed to access this URL. mod_status is configured
in /etc/apache2/mod_status.conf.
mod_suexec
mod_suexec lets you run CGI scripts under a different user and group. This module
is enabled by default.
mod_userdir
Makes user-specific directories available under ~user/. The UserDir directive
must be specified in the configuration. This module is enabled by default.
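A typical sketch of the required directive (the directory name follows the common convention):

```apache
UserDir public_html
```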
Prefork MPM
The prefork MPM implements a nonthreaded, preforking Web server. It makes the Web
server behave similarly to Apache version 1.x in that it isolates each request and handles
it by forking a separate child process. Thus problematic requests cannot affect others,
avoiding a lockup of the Web server.
While providing stability with this process-based approach, the prefork MPM consumes
more system resources than its counterpart, the worker MPM. The prefork MPM is
considered the default MPM for Unix-based operating systems.
IMPORTANT: MPMs in This Document
This document assumes Apache is used with the prefork MPM.
Worker MPM
The worker MPM provides a multithreaded Web server. A thread is a lightweight form
of a process. The advantage of a thread over a process is its lower resource consumption.
Instead of only forking child processes, the worker MPM serves requests by using
threads with server processes. The preforked child processes are multithreaded. This
approach makes Apache perform better by consuming fewer system resources than the
prefork MPM.
One major disadvantage is the stability of the worker MPM: if a thread becomes corrupt,
all threads of a process can be affected. In the worst case, this may result in a server
crash. Especially when using the Common Gateway Interface (CGI) with Apache under
heavy load, internal server errors might occur due to threads unable to communicate
with system resources. Another argument against using the worker MPM with Apache
is that not all available Apache modules are thread-safe and thus cannot be used in
conjunction with the worker MPM.
WARNING: Using PHP Modules with MPMs
Not all available PHP modules are thread-safe. Using the worker MPM with
mod_php is strongly discouraged.
30.4.6 Compilation
Apache can be extended by advanced users by writing custom modules. To develop
modules for Apache or compile third-party modules, the package apache2-devel
is required along with the corresponding development tools. apache2-devel also
contains the apxs2 tools, which are necessary for compiling additional modules for
Apache.
apxs2 enables the compilation and installation of modules from source code (including
the required changes to the configuration files), which creates dynamic shared objects
(DSOs) that can be loaded into Apache at runtime.
The apxs2 binaries are located under /usr/sbin:
/usr/sbin/apxs2: suitable for building an extension module that works with
any MPM. The installation location is /usr/lib/apache2.
/usr/sbin/apxs2-prefork: suitable for prefork MPM modules. The installation location is /usr/lib/apache2-prefork.
/usr/sbin/apxs2-worker: suitable for worker MPM modules. The installation
location is /usr/lib/apache2-worker.
apxs2 installs modules so they can be used with all MPMs. The other two programs
install modules so they can only be used with the respective MPM: apxs2-prefork
installs modules in /usr/lib/apache2-prefork and apxs2-worker installs
modules in /usr/lib/apache2-worker.
Install and activate a module from source code with the commands cd
/path/to/module/source; apxs2 -cia mod_foo.c (-c compiles the
module, -i installs it, and -a activates it). Other options of apxs2 are described in
the apxs2(1) man page.
Tells Apache to handle all files within this directory as CGI scripts.
Tells the server to treat files with the extensions .pl and .cgi as CGI scripts. Adjust
according to your needs.
The Order and Allow directives control the default access state and the order
in which Allow and Deny directives are evaluated. In this case, deny statements
are evaluated before allow statements, and access from everywhere is enabled.
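The directives described above might be combined as follows (a sketch; the paths follow the openSUSE defaults used in this chapter):

```apache
ScriptAlias /cgi-bin/ "/srv/www/cgi-bin/"

<Directory "/srv/www/cgi-bin/">
    # Allow execution of CGI scripts within this directory
    Options +ExecCGI
    # Treat files with these extensions as CGI scripts
    AddHandler cgi-script .pl .cgi
    # Deny statements are evaluated first; access from everywhere is enabled
    Order allow,deny
    Allow from all
</Directory>
```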
30.5.3 Troubleshooting
If you do not see the output of the test program but an error message instead, check the
following:
CGI Troubleshooting
Have you reloaded the server after having changed the configuration? Check with
rcapache2 probe.
If you have configured your custom CGI directory, is it configured properly? If in
doubt, try the script within the default CGI directory /srv/www/cgi-bin/ and
call it with http://localhost/cgi-bin/test.cgi.
Are the file permissions correct? Change into the CGI directory and execute
ls -l test.cgi. Its output should start with

-rwxr-xr-x  1 root root
Make sure that the script does not contain programming errors. If you have not
changed test.cgi, this should not be the case, but if you are using your own programs,
always make sure that they do not contain programming errors.
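For reference, a CGI script as simple as the following is enough to verify the setup (a hypothetical minimal script; the shipped test.cgi may differ):

```shell
#!/bin/sh
# A CGI response consists of a Content-Type header, an empty line, and the body.
cgi_response() {
    echo "Content-type: text/plain"
    echo
    echo "CGI works"
}
cgi_response
```

Place it in the CGI directory, make it executable with chmod 755, and call it through the Web server.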
For this purpose, the server sends an SSL certificate that holds information proving the
server's valid identity before any request to a URL is answered. In turn, this guarantees
that the server is the uniquely correct end point for the communication. Additionally,
the certificate establishes an encrypted connection between client and server that can
transport information without the risk of exposing sensitive, plain-text content.
mod_ssl does not implement the SSL/TLS protocols itself, but acts as an interface between Apache and an SSL library. In openSUSE, the OpenSSL library is used. OpenSSL
is automatically installed with Apache.
The most visible effect of using mod_ssl with Apache is that URLs are prefixed with
https:// instead of http://.
/etc/apache2/ssl.crt/server.crt
/etc/apache2/ssl.key/server.key
/etc/apache2/ssl.csr/server.csr
A copy of ca.crt is also placed at /srv/www/htdocs/CA.crt for download.
IMPORTANT
A dummy certificate should never be used on a production system. Only use
it for testing purposes.
answer every question. If one does not apply to you or you want to leave it blank,
use ".". Common Name is the name of the CA itself; choose a significant name,
such as My company CA.
4 Generating X.509 certificate for CA signed by itself
Choose certificate version 3 (the default).
5 Generating RSA private key for SERVER (1024 bit)
No interaction needed.
6 Generating X.509 certificate signing request for SERVER
Create the distinguished name for the server key here. Questions are almost
identical to the ones already answered for the CA's distinguished name. The data
entered here applies to the Web server and does not necessarily need to be identical to the CA's data (for example, if the server is located elsewhere).
IMPORTANT: Selecting a Common Name
The common name you enter here must be the fully qualified hostname
of your secure server (for example, www.example.com). Otherwise the
browser issues a warning that the certificate does not match the server
when accessing the Web server.
7 Generating X.509 certificate signed by own CA
Choose certificate version 3 (the default).
8 Encrypting RSA private key of CA with a pass phrase
for security
It is strongly recommended to encrypt the private key of the CA with a password,
so choose Y and enter a password.
9 Encrypting RSA private key of SERVER with a pass phrase
for security
Encrypting the server key with a password requires you to enter this password
every time you start the Web server. This makes it difficult to automatically start
the server on boot or to restart the Web server. Therefore, it usually makes sense to
answer N to this question. Keep in mind that your key is unprotected when not encrypted with a password, and make sure that only authorized persons have access
to the key.
IMPORTANT: Encrypting the Server Key
If you choose to encrypt the server key with a password, increase the
value for APACHE_TIMEOUT in /etc/sysconfig/apache2. Otherwise
you do not have enough time to enter the passphrase before the attempt
to start the server is aborted.
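In /etc/sysconfig/apache2, the entry might then look like this (the value is an arbitrary illustration):

```
APACHE_TIMEOUT="120"
```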
The script's result page presents a list of certificates and keys it has generated. Contrary
to what the script outputs, the files have not been generated in the local directory conf,
but in the correct locations under /etc/apache2/.
The last step is to copy the CA certificate file from /etc/apache2/ssl.crt/ca
.crt to a location where your users can access it in order to incorporate it into the list
of known and trusted CAs in their Web browsers. Otherwise a browser complains that
the certificate was issued by an unknown authority. The certificate is valid for one year.
IMPORTANT: Self-Signed Certificates
Only use a self-signed certificate on a Web server that is accessed by people
who know and trust you as a certificate authority. It is not recommended to
use such a certificate on a public shop, for example.
When requesting an officially signed certificate, you do not send a certificate to the
CA. Instead, issue a Certificate Signing Request (CSR). To create a CSR, call the script
/usr/share/ssl/misc/CA.sh -newreq.
First the script asks for a password with which the CSR should be encrypted. Then you
are asked to enter a distinguished name. This requires you to answer a few questions,
such as country name or organization name. Enter valid data; everything you enter
here later shows up in the certificate and is checked. You do not need to answer every
question. If one does not apply to you or you want to leave it blank, use ".". Common
Name is the name of the CA itself; choose a significant name, such as My company
CA. Last, a challenge password and an alternative company name must be entered.
Find the CSR in the directory from which you called the script. The file is named
newreq.pem.
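For comparison, roughly the same CSR can be produced directly with the openssl command line. This is only a sketch: the subject values and file names are examples, and unlike the CA.sh wrapper, the -nodes option leaves the generated key unencrypted.

```shell
# Sketch: generate a key and CSR with openssl instead of the CA.sh wrapper.
# Subject fields are examples; -nodes skips the key passphrase.
cd "$(mktemp -d)"
openssl req -new -newkey rsa:2048 -nodes \
    -subj "/C=US/O=Example Inc/CN=www.example.com" \
    -keyout newkey.pem -out newreq.pem
ls newreq.pem
```

The resulting newreq.pem is what you send to the CA.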
mented. Refer to Section Virtual Host Configuration (page 486) for the general virtual
host configuration.
To get started, it should be sufficient to adjust the values for the following directives:
DocumentRoot
ServerName
ServerAdmin
ErrorLog
TransferLog
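A minimal sketch of an SSL-enabled virtual host using these directives could look as follows. All names, addresses, and paths are examples; the certificate and key file names are an assumption matching the layout under /etc/apache2/ described earlier in this chapter.

```apache
<VirtualHost 192.168.1.10:443>
    # Hypothetical SSL virtual host; server name, address, and paths are examples.
    ServerName www.example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /srv/www/htdocs/example
    ErrorLog /var/log/apache2/example-error_log
    TransferLog /var/log/apache2/example-access_log
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl.crt/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/server.key
</VirtualHost>
```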
IMPORTANT: Name-Based Virtual Hosts and SSL
It is not possible to run multiple SSL-enabled virtual hosts on a server with
only one IP address. Users connecting to such a setup receive a warning message
stating that the certificate does not match the server name every time they
visit the URL. A separate IP address or port is necessary for every SSL-enabled
domain to achieve communication based on a valid SSL certificate.
tion. The openSUSE default configuration does not allow execution of CGI scripts from
everywhere.
All CGI scripts run as the same user, so different scripts can potentially conflict with
each other. The module suEXEC lets you run CGI scripts under a different user and
group.
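As an illustration, a CGI script can be as small as the following shell sketch. Placed in the configured CGI directory (for example, /srv/www/cgi-bin on openSUSE) and marked executable, it returns a plain-text HTTP response. The function name is only for illustration.

```shell
# Minimal CGI sketch: print a header, a blank line, then the body.
cgi_hello() {
    printf 'Content-Type: text/plain\r\n\r\n'
    # SERVER_NAME is set by the Web server; fall back to localhost when testing.
    printf 'Hello from %s\n' "${SERVER_NAME:-localhost}"
}
cgi_hello
```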
30.8 Troubleshooting
If Apache does not start, the Web page is not accessible, or users cannot connect to the
Web server, it is important to find the cause of the problem. Here are some typical
places to look for error explanations and important things to check.
First, rcapache2 (described in Section 30.3, Starting and Stopping Apache
(page 497)) is verbose about errors, so it can be quite helpful if it is actually used for operating Apache. Sometimes it is tempting to use the binary /usr/sbin/httpd2 for
starting or stopping the Web server. Avoid doing this and use the rcapache2 script
instead. rcapache2 even provides tips and hints for solving configuration errors.
Second, the importance of log files cannot be overemphasized. In case of both fatal and
nonfatal errors, the Apache log files, mainly the error log file, are the places to look for
causes. Additionally, you can control the verbosity of the logged messages with the
LogLevel directive if more detail is needed in the log files. By default, the error log
file is located at /var/log/apache2/error_log.
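A quick way to check the log from the command line, assuming the default path; the helper function is hypothetical:

```shell
# Sketch: show the most recent Apache errors. The default openSUSE path is
# used unless another log file is passed as an argument.
show_apache_errors() {
    log=${1:-/var/log/apache2/error_log}
    if [ -r "$log" ]; then
        tail -n 20 "$log"
    else
        echo "error log not found at $log"
    fi
}
show_apache_errors
```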
30.9.3 Development
More information about developing Apache modules or about getting involved in the
Apache Web server project are available at the following locations:
Apache Developer Information
http://httpd.apache.org/dev/
Apache Developer Documentation
http://httpd.apache.org/docs/2.2/developer/
Writing Apache Modules with Perl and C
http://www.modperl.com/
31
Using the YaST FTP Server module, you can configure your machine to function as an
FTP server. Anonymous and/or authenticated users can connect to your machine and
download and, depending on the configuration, upload files using the FTP protocol.
YaST provides a unified configuration interface for various FTP server daemons installed
on your system.
The YaST FTP Server configuration module can be used to configure two different
FTP server daemons: vsftpd (Very Secure FTP Daemon) and pure-ftpd. Only installed
servers can be configured. Standard openSUSE media does not contain the pure-ftpd
package. However, if the pure-ftpd package is installed from another repository, it can
be configured using the YaST module.
vsftpd and pure-ftpd have slightly different configuration options, especially in the
Expert Settings dialog. This chapter describes the settings of vsftpd, which is the
default FTP server for openSUSE.
To configure the FTP server, run YaST and choose Network Services > FTP
Server. If no FTP server is installed, you will be asked which server should be installed.
Choose a server and confirm the dialog.
the system boot and starting it manually. If the FTP server should be started only after
an FTP connection request, choose Via xinetd.
The current status of the FTP server is shown in the Switch On and Off frame. Start the
FTP server by pressing Start FTP Now. To stop the server, press Stop FTP Now. After
having changed the settings of the server press Save Settings and Restart FTP Now.
Your configuration is also saved when you leave the configuration module with
Accept.
The Select Service frame of the FTP Start-Up dialog shows which FTP server is used.
Either vsftpd (Very Secure FTP Daemon) or pure-ftpd can be used. If both servers are
installed, you can choose between them. The pure-ftpd package is not included in the
standard openSUSE media so you have to install it from a different installation source
if you want to use it.
Figure 31.1 FTP Server Configuration Start-Up
If you check the Chroot Everyone option, all local users will be placed in a chroot jail
in their home directory after login. This option has security implications, especially if
the users have upload permission or shell access, so be careful enabling this option.
If you check the Verbose Logging option, all FTP requests and responses are logged.
In the Umask for Anonymous and Umask for Authenticated Users fields, set the file
creation mask for anonymous and authenticated users respectively.
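To see what a given umask means in practice, the following shell sketch creates a file under a restrictive example mask. The value 177 is only an illustration, not a recommended setting.

```shell
# Sketch: demonstrate the effect of a umask on newly created files,
# done in a scratch directory so nothing on the system is touched.
tmpdir=$(mktemp -d)
cd "$tmpdir"
umask 177                  # example mask: new files get 666 & ~177 = 600
touch upload_demo
stat -c '%a' upload_demo   # prints 600
```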
In the FTP Directories frame, set the directories used for anonymous and authenticated
users. The default FTP directory for anonymous users is /srv/ftp. Note that vsftpd
does not allow this directory to be writable for all users. The subdirectory upload
with write permissions for anonymous users is created instead.
NOTE
pure-ftpd allows the FTP directory for anonymous users to be writable. Make
sure you remove the write permissions from the directory that was used with
pure-ftpd before switching back to the vsftpd server.
31.4 Authentication
In the Enable/Disable Anonymous and Local Users frame of the Authentication dialog,
you are able to set which users are allowed to access your FTP server. You can grant
access only for anonymous users, only for authenticated users with accounts on the
system or for both types of users.
If you want to allow users to upload files to the FTP server, check Enable Upload in
the Uploading frame of the Authentication dialog. Here you are able to allow uploading
or creating directories even for anonymous users by checking the respective box.
NOTE
If a vsftpd server is used and you want anonymous users to be able to upload
files or create directories, a subdirectory with writing permissions for all users
has to be created in this directory.
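The required layout can be sketched in shell. This example uses a scratch directory so it can be tried without root; for the real server, apply the same modes to /srv/ftp. The directory name upload matches the text; the modes are illustrative.

```shell
# Sketch: anonymous-upload layout for vsftpd, shown in a scratch directory.
ftproot=$(mktemp -d)             # stands in for /srv/ftp
chmod 755 "$ftproot"             # the FTP root itself must not be world-writable
mkdir -m 777 "$ftproot/upload"   # writable subdirectory for anonymous uploads
stat -c '%a' "$ftproot" "$ftproot/upload"   # prints 755, then 777
```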
Part V. Mobility
32
Power Management
32.2 ACPI
ACPI (advanced configuration and power interface) was designed to enable the operating
system to set up and control the individual hardware components. ACPI supersedes
both PnP and APM. It delivers information about the battery, AC adapter, temperature,
fan, and system events, like close lid or battery low.
The BIOS provides tables containing information about the individual components and
hardware access methods. The operating system uses this information for tasks like
assigning interrupts or activating and deactivating components. Because the operating
system executes commands stored in the BIOS, the functionality depends on the BIOS
implementation. The tables ACPI can detect and load are reported in /var/log/boot.msg. See Section 32.2.4, Troubleshooting (page 534) for more information about
troubleshooting ACPI problems.
/proc/acpi/sleep
Provides information about possible sleep states.
/proc/acpi/event
All events are reported here and processed by the Powersave daemon
(powersaved). If no daemon accesses this file, events, such as a brief click on
the power button or closing the lid, can be read with cat /proc/acpi/event
(terminate with Ctrl + C).
/proc/acpi/dsdt and /proc/acpi/fadt
These files contain the ACPI tables DSDT (differentiated system description table)
and FADT (fixed ACPI description table). They can be read with acpidmp,
acpidisasm, and dmdecode. These programs and their documentation are located in the package pmtools. For example, acpidmp DSDT | acpidisasm.
/proc/acpi/ac_adapter/AC/state
Shows whether the AC adapter is connected.
/proc/acpi/battery/BAT*/{alarm,info,state}
Detailed information about the battery state. The charge level is read by comparing
the last full capacity from info with the remaining capacity
from state. A more convenient way to do this is to use one of the special programs introduced in Section 32.2.3, ACPI Tools (page 534). The charge level at
which a battery event (such as warning, low and critical) is triggered can be specified
in alarm.
/proc/acpi/button
This directory contains information about various switches, like the laptop lid and
buttons.
/proc/acpi/fan/FAN/state
Shows if the fan is currently active. Activate or deactivate the fan manually by
writing 0 (on) or 3 (off) into this file. However, both the ACPI code in the kernel
and the hardware (or the BIOS) overwrite this setting when the system gets too
warm.
/proc/acpi/processor/*
A separate subdirectory is kept for each CPU included in your system.
/proc/acpi/processor/*/info
Information about the energy saving options of the processor.
/proc/acpi/processor/*/power
Information about the current processor state. An asterisk next to C2 indicates that
the processor is idle. This is the most frequent state, as can be seen from the usage
value.
/proc/acpi/processor/*/throttling
Can be used to set the throttling of the processor clock. Usually, throttling is possible
in eight levels. This is independent of the frequency control of the CPU.
/proc/acpi/processor/*/limit
If the performance (outdated) and the throttling are automatically controlled by a
daemon, the maximum limits can be specified here. Some of the limits are determined by the system. Some can be adjusted by the user.
/proc/acpi/thermal_zone/
A separate subdirectory exists for every thermal zone. A thermal zone is an area
with similar thermal properties whose number and names are designated by the
hardware manufacturer. However, many of the possibilities offered by ACPI are
rarely implemented. Instead, the temperature control is handled conventionally by
the BIOS. The operating system is not given much opportunity to intervene, because
the life span of the hardware is at stake. Therefore, some of the files only have a
theoretical value.
/proc/acpi/thermal_zone/*/temperature
Current temperature of the thermal zone.
/proc/acpi/thermal_zone/*/state
The state indicates if everything is ok or if ACPI applies active or passive
cooling. In the case of ACPI-independent fan control, this state is always ok.
/proc/acpi/thermal_zone/*/cooling_mode
Select the cooling method controlled by ACPI. Choose from passive (less performance, economical) or active cooling mode (full performance, fan noise).
/proc/acpi/thermal_zone/*/trip_points
Enables the determination of temperature limits for triggering specific actions, like
passive or active cooling, suspension (hot), or a shutdown (critical). The
possible actions are defined in the DSDT (device-dependent). The trip points determined in the ACPI specification are critical, hot, passive, active1, and
active2. Even if not all of them are implemented, they must always be entered
in this file in this order. For example, the entry echo 90:0:70:0:0 >
trip_points sets the temperature for critical to 90 and the temperature
for passive to 70 (all temperatures measured in degrees Celsius).
/proc/acpi/thermal_zone/*/polling_frequency
If the value in temperature is not updated automatically when the temperature
changes, toggle the polling mode here. The command echo X >
/proc/acpi/thermal_zone/*/polling_frequency causes the temperature to be queried every X seconds. Set X=0 to disable polling.
You do not need to edit any of these settings and events manually. This is
handled by the Powersave daemon (powersaved) and its various front-ends, like
powersave, kpowersave, and wmpowersave. See Section 32.2.3, ACPI Tools
(page 534).
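As an example of reading these files, the battery charge level mentioned above can be computed from info and state. This is only a sketch: it assumes a battery named BAT0 and falls back to a message on systems without the ACPI /proc interface.

```shell
# Sketch: compute the battery charge level from the ACPI /proc files.
battery_charge() {
    info=/proc/acpi/battery/BAT0/info
    state=/proc/acpi/battery/BAT0/state
    if [ -r "$info" ] && [ -r "$state" ]; then
        # Extract the numeric values from lines like "last full capacity: 4400 mWh"
        full=$(awk -F: '/last full capacity/ {print $2+0}' "$info")
        now=$(awk -F: '/remaining capacity/ {print $2+0}' "$state")
        echo "charge: $((100 * now / full))%"
    else
        echo "no ACPI battery information found"
    fi
}
battery_charge
```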
the CPU frequency is adjusted in regard to the current system load. By default,
one of the kernel implementations is used. However, on some hardware or in
regard to specific processors or drivers, the userspace implementation is still
the only working solution.
ondemand governor
This is the kernel implementation of a dynamic CPU frequency policy and
should work on most systems. As soon as there is a high system load, the CPU
frequency is immediately increased. It is lowered on a low system load.
conservative governor
This governor is similar to the ondemand implementation, except that a more
conservative policy is used. The load of the system must be high for a specific
amount of time before the CPU frequency is increased.
powersave governor
The cpu frequency is statically set to the lowest possible.
performance governor
The cpu frequency is statically set to the highest possible.
Throttling the Clock Frequency
This technology omits a certain percentage of the clock signal impulses for the
CPU. At 25% throttling, every fourth impulse is omitted. At 87.5%, only every
eighth impulse reaches the processor. However, the energy savings are a little less
than linear. Normally, throttling is only used if frequency scaling is not available
or to maximize power savings. This technology, too, must be controlled by a special
process. The system interface is /proc/acpi/processor/*/throttling.
Putting the Processor to Sleep
The operating system puts the processor to sleep whenever there is nothing to do.
In this case, the operating system sends the CPU a halt command. There are three
states: C1, C2, and C3. In the most economic state, C3, even the synchronization
of the processor cache with the main memory is halted. Therefore, this state can
only be applied if no other device modifies the contents of the main memory via
bus master activity. Some drivers prevent the use of C3. The current state is displayed in /proc/acpi/processor/*/power.
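For dynamic frequency scaling, the governor in use can be inspected through the cpufreq sysfs interface. Note that this path is not described in this chapter; it is an assumption that depends on your kernel and hardware.

```shell
# Sketch: query the current cpufreq governor through sysfs (path assumed).
show_governor() {
    gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    if [ -r "$gov" ]; then
        cat "$gov"
    else
        echo "cpufreq sysfs interface not available"
    fi
}
show_governor
# As root, a governor can be set by writing its name to the same file, e.g.:
# echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```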
Frequency scaling and throttling are only relevant if the processor is busy, because the
most economic C state is applied anyway when the processor is idle. If the CPU is busy,
frequency scaling is the recommended power saving method. Often the processor only
works with a partial load. In this case, it can be run with a lower frequency. Usually,
dynamic frequency scaling controlled by the kernel ondemand governor or a daemon,
such as powersaved, is the best approach. A static setting to a low frequency is useful
for battery operation or if you want the computer to be cool or quiet.
Throttling should be used as the last resort, for example, to extend the battery operation
time despite a high system load. However, some systems do not run smoothly when
they are throttled too much. Moreover, CPU throttling does not make sense if the CPU
has little to do.
In openSUSE these technologies are controlled by the powersave daemon. The configuration is explained in Section 32.4, The powersave Package (page 537).
32.2.4 Troubleshooting
There are two different types of problems. On one hand, the ACPI code of the kernel
may contain bugs that were not detected in time. In this case, a solution will be made
available for download. More often, however, the problems are caused by the BIOS.
Sometimes, deviations from the ACPI specification are purposely integrated in the
BIOS to circumvent errors in the ACPI implementation in other widespread operating
systems. Hardware components that have serious errors in the ACPI implementation
are recorded in a blacklist that prevents the Linux kernel from using ACPI for these
components.
The first thing to do when problems are encountered is to update the BIOS. If the
computer does not boot at all, one of the following boot parameters may be helpful:
pci=noacpi
Do not use ACPI for configuring the PCI devices.
acpi=ht
Only perform a simple resource configuration. Do not use ACPI for other purposes.
acpi=off
Disable ACPI.
WARNING: Problems Booting without ACPI
Some newer machines (especially SMP systems and AMD64 systems) need ACPI
for configuring the hardware correctly. On these machines, disabling ACPI can
cause problems.
Monitor the boot messages of the system with the command dmesg | grep -2i
acpi (or all messages, because the problem may not be caused by ACPI) after booting.
If an error occurs while parsing an ACPI table, the most important table, the DSDT, can be replaced with an improved version. In this case, the faulty DSDT of the
BIOS is ignored. The procedure is described in Section 32.4.3, Troubleshooting
(page 541).
In the kernel configuration, there is a switch for activating ACPI debug messages. If a
kernel with ACPI debugging is compiled and installed, experts searching for an error
can be supported with detailed information.
If you experience BIOS or hardware problems, it is always advisable to contact the
manufacturers. Especially if they do not always provide assistance for Linux, they
should be confronted with the problems. Manufacturers will only take the issue seriously
if they realize that an adequate number of their customers use Linux.
down. To avoid this, a special kernel extension has been developed for mobile devices.
See /usr/src/linux/Documentation/laptop-mode.txt for details.
Another important factor is the way active programs behave. For example, good editors
regularly write hidden backups of the currently modified file to the hard disk, causing
the disk to wake up. Features like this can be disabled at the expense of data integrity.
In this connection, the mail daemon postfix makes use of the variable
POSTFIX_LAPTOP. If this variable is set to yes, postfix accesses the hard disk far
less frequently. However, this is irrelevant if the interval for kupdated was increased.
/etc/sysconfig/powersave/common
This file contains general settings for the powersave daemon. For example, the
amount of debug messages in /var/log/messages can be increased by increasing the value of the variable DEBUG.
/etc/sysconfig/powersave/events
The powersave daemon needs this file for processing system events. An event can
be assigned external actions or actions performed by the daemon itself. For external
actions, the daemon tries to run an executable file (usually a Bash script) in /usr/
lib/powersave/scripts/. Predefined internal actions are:
ignore
throttle
dethrottle
suspend_to_disk
suspend_to_ram
standby
notify
screen_saver
reread_cpu_capabilities
throttle slows down the processor by the value defined in MAX_THROTTLING.
This value depends on the current scheme. dethrottle sets the processor to full
performance. suspend_to_disk, suspend_to_ram, and standby trigger
the system event for a sleep mode. These three actions are generally responsible
for triggering the sleep mode, but they should always be associated with specific
system events.
The directory /usr/lib/powersave/scripts contains scripts for processing
events:
switch_vt
Useful if the screen is displaced after a suspend or standby.
wm_logout
Saves the settings and logs out from GNOME, KDE, or other window managers.
wm_shutdown
Saves the GNOME or KDE settings and shuts down the system.
If, for example, the variable
EVENT_GLOBAL_SUSPEND2DISK="prepare_suspend_to_disk
do_suspend_to_disk" is set, the two scripts or actions are processed in the
specified order as soon as the user gives powersaved the command for the sleep
mode suspend to disk. The daemon runs the external script /usr/lib/
powersave/scripts/prepare_suspend_to_disk. After this script has
been processed successfully, the daemon runs the internal action
do_suspend_to_disk and sets the computer to the sleep mode after the script
has unloaded critical modules and stopped services.
The actions for the event of a sleep button could be modified as in
EVENT_BUTTON_SLEEP="notify suspend_to_disk". In this case, the
user is informed about the suspend by a pop-up window in X or a message on the
console. Subsequently, the event EVENT_GLOBAL_SUSPEND2DISK is generated,
resulting in the execution of the mentioned actions and a secure system suspend
mode. The internal action notify can be customized using the variable
NOTIFY_METHOD in /etc/sysconfig/powersave/common.
/etc/sysconfig/powersave/cpufreq
Contains variables for optimizing the dynamic CPU frequency settings and whether
the user space or the kernel implementation should be used.
/etc/sysconfig/powersave/battery
Contains battery limits and other battery-specific settings.
/etc/sysconfig/powersave/thermal
Activates cooling and thermal control. Details about this subject are available in
the file /usr/share/doc/packages/powersave/README.thermal.
/etc/sysconfig/powersave/scheme_*
These are the various schemes that adapt the power consumption to certain deployment scenarios. A number of schemes are preconfigured and can be used as they
are. Custom schemes can be saved here.
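Putting the event examples from this section together, a hypothetical excerpt from /etc/sysconfig/powersave/events could look like this:

```shell
# Hypothetical excerpt from /etc/sysconfig/powersave/events, combining the
# examples from this section: notify the user, then suspend to disk.
EVENT_BUTTON_SLEEP="notify suspend_to_disk"
EVENT_GLOBAL_SUSPEND2DISK="prepare_suspend_to_disk do_suspend_to_disk"
```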
32.4.3 Troubleshooting
All error messages and alerts are logged in the file /var/log/messages. If you
cannot find the needed information, increase the verbosity of the messages of powersave
using DEBUG in the file /etc/sysconfig/powersave/common. Increase the
value of the variable to 7 or even 15 and restart the daemon. The more detailed error
messages in /var/log/messages should help you to find the error. The following
sections cover the most common problems with powersave and the different sleep
modes.
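The DEBUG change described above can be scripted. This sketch works on a copy of the file so it can be tried safely; apply it to the real file as root and restart the daemon afterwards.

```shell
# Sketch: raise the powersave daemon's verbosity by setting DEBUG=7.
# Done on a copy of the file; a fallback line is used on systems
# where /etc/sysconfig/powersave/common does not exist.
cfg=/etc/sysconfig/powersave/common
tmp=$(mktemp)
cp "$cfg" "$tmp" 2>/dev/null || echo 'DEBUG=1' > "$tmp"
sed -i 's/^DEBUG=.*/DEBUG=7/' "$tmp"
grep '^DEBUG=' "$tmp"    # prints DEBUG=7
```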
Wireless Communication
33
There are several possibilities for using your Linux system to communicate with other
computers, cellular phones, or peripheral devices. WLAN (wireless LAN) can be used
to network laptops. Bluetooth can be used to connect individual system components
(mouse, keyboard), peripheral devices, cellular phones, PDAs, and individual computers
with each other. IrDA is mostly used for communication with PDAs or cellular phones.
This chapter introduces all three technologies and their configuration.
Table 33.1   Overview of Various WLAN Standards

Name      Band (GHz)   Max. Transmission Rate (MBit/s)   Note
802.11    2.4          2                                 Outdated; virtually no end devices available
802.11b   2.4          11                                Widespread
802.11a   5            54                                Less common
802.11g   2.4          54                                Backward-compatible with 11b
33.1.1 Hardware
802.11 cards are not supported by openSUSE. Most cards using 802.11a, 802.11b,
and 802.11g are supported. New cards usually comply with the 802.11g standard, but
cards using 802.11b are still available. Normally, cards with the following chips are
supported:
ADMTek ADM8211
Aironet 4500, 4800
Atheros 5210, 5211, 5212
Atmel at76c502, at76c503, at76c504, at76c506
Broadcom BCM43xx
Intel PRO/Wireless 2100, 2200BG, 2915ABG, 3945ABG
Intel Wireless WiFi Link 4965GN
Intersil Prism2/2.5/3
Intersil PrismGT
Lucent/Agere Hermes
Ralink RT2400, RT2500, RT2570, RT61, RT73
Realtek RTL8187
Texas Instruments ACX100, ACX111
ZyDAS zd1201
A number of older cards that are rarely used and no longer available are also supported.
An extensive list of WLAN cards and the chips they use is available at the Web site of
AbsoluteValue Systems at http://www.linux-wlan.org/docs/wlan
_adapters.html.gz. Find an overview of the various WLAN chips at http://
wiki.uni-konstanz.de/wiki/bin/view/Wireless/ListeChipsatz.
Some cards need a firmware image that must be loaded into the card when the driver
is initialized. This is the case with Intersil PrismGT, Atmel, and TI ACX100 and
ACX111. The firmware can easily be installed with the YaST Online Update. The
firmware for Intel PRO/Wireless cards ships with openSUSE and is automatically installed by YaST as soon as a card of this type is detected. More information about this
subject is available in the installed system in /usr/share/doc/packages/
wireless-tools/README.firmware.
33.1.2 Function
In wireless networking, various techniques and configurations are used to ensure fast,
high-quality, and secure connections. Different operating types suit different setups. It
can be difficult to choose the right authentication method. The available encryption
methods have different advantages and pitfalls.
Operating Mode
Basically, wireless networks can be classified as managed networks and ad-hoc networks.
Managed networks have a managing element: the access point. In this mode (also referred to as infrastructure mode), all connections of the WLAN stations in the network
run over the access point, which may also serve as a connection to an Ethernet. Ad-hoc
networks do not have an access point. The stations communicate directly with each
other. The transmission range and the number of participating stations are greatly limited in ad-hoc networks.
Authentication
To make sure that only authorized stations can connect, various authentication mechanisms are used in managed networks:
Open
An open system is a system that does not require authentication. Any station can
join the network. Nevertheless, WEP encryption (see Section Encryption
(page 547)) can be used.
Shared Key (according to IEEE 802.11)
In this procedure, the WEP key is used for the authentication. However, this procedure is not recommended, because it makes the WEP key more susceptible to attacks. All an attacker needs to do is to listen long enough to the communication
between the station and the access point. During the authentication process, both
sides exchange the same information, once in encrypted form and once in unencrypted form. This makes it possible for the key to be reconstructed with suitable
tools. Because this method makes use of the WEP key for the authentication and
for the encryption, it does not enhance the security of the network. A station that
has the correct WEP key can authenticate, encrypt, and decrypt. A station that does
not have the key cannot decrypt received packets. Accordingly, it cannot communicate, regardless of whether it had to authenticate itself.
WPA-PSK (according to IEEE 802.1x)
WPA-PSK (PSK stands for preshared key) works similarly to the Shared Key
procedure. All participating stations as well as the access point need the same key.
The key is 256 bits in length and is usually entered as a passphrase. This system
does not need a complex key management like WPA-EAP and is more suitable for
private use. Therefore, WPA-PSK is sometimes referred to as WPA Home.
WPA-EAP (according to IEEE 802.1x)
Actually, WPA-EAP is not an authentication system but a protocol for transporting
authentication information. WPA-EAP is used to protect wireless networks in enterprises. In private networks, it is scarcely used. For this reason, WPA-EAP is
sometimes referred to as WPA Enterprise.
WPA-EAP needs a RADIUS server to authenticate users. EAP offers three different
methods for connecting and authenticating to the server: TLS (Transport Layer
Security), TTLS (Tunneled Transport Layer Security), and PEAP (Protected Extensible Authentication Protocol). In a nutshell, these options work as follows:
EAP-TLS
TLS authentication relies on the mutual exchange of certificates both for
server and client. First, the server presents its certificate to the client where it
is evaluated. If the certificate is considered valid, the client in turn presents its
certificate to the server. While TLS is secure, it requires a working certification
management infrastructure in your network. This infrastructure is rarely found
in private networks.
EAP-TTLS and PEAP
Both TTLS and PEAP are two-stage protocols. In the first stage, a secure connection is
established and in the second one the client authentication data is exchanged.
They require far less certification management overhead than TLS, if any.
Encryption
There are various encryption methods to ensure that no unauthorized person can read
the data packets that are exchanged in a wireless network or gain access to the network:
WEP (defined in IEEE 802.11)
This standard makes use of the RC4 encryption algorithm, originally with a key
length of 40 bits, later also with 104 bits. Often, the length is declared as 64 bits
or 128 bits, depending on whether the 24 bits of the initialization vector are included.
However, this standard has some weaknesses. Attacks against the keys generated
by this system may be successful. Nevertheless, it is better to use WEP than not
encrypt the network at all.
Operating Mode
A station can be integrated in a WLAN in three different modes. The suitable mode
depends on the network in which to communicate: Ad-hoc (peer-to-peer network
without access point), Managed (network is managed by an access point), or
Master (your network card should be used as the access point). To use any of the
WPA-PSK or WPA-EAP modes, the operating mode must be set to Managed.
Network Name (ESSID)
All stations in a wireless network need the same ESSID for communicating with
each other. If nothing is specified, the card automatically selects an access point,
which may not be the one you intended to use.
Authentication Mode
Select a suitable authentication method for your network: Open, Shared Key, WPA-PSK, or WPA-EAP. If you select WPA authentication, a network name must be
set.
Expert Settings
This button opens a dialog for the detailed configuration of your WLAN connection.
A detailed description of this dialog is provided later.
After completing the basic settings, your station is ready for deployment in the WLAN.
IMPORTANT: Security in Wireless Networks
Be sure to use one of the supported authentication and encryption methods
to protect your network traffic. Unencrypted WLAN connections allow third
parties to intercept all network data. Even a weak encryption (WEP) is better
than none at all. Refer to Section Encryption (page 547) and Section Security
(page 552) for information.
Depending on the selected authentication method, YaST prompts you to fine-tune the
settings in another dialog. For Open, there is nothing to configure, because this setting
implements unencrypted operation without authentication.
Shared Key
Set a key input type. Choose from Passphrase, ASCII, or Hexadecimal. You may
keep up to four different keys to encrypt the transmitted data. Click WEP Keys to
enter the key configuration dialog. Set the length of the key: 128 bit or 64 bit. The
default setting is 128 bit. In the list area at the bottom of the dialog, up to four different keys can be specified for your station to use for the encryption. Press Set as
Default to define one of them as the default key. Unless you change this, YaST
uses the first entered key as the default key. If the default key is deleted, one of
the other keys must be marked manually as the default key. Click Edit to modify
existing list entries or create new keys. In this case, a pop-up window prompts you
to select an input type (Passphrase, ASCII, or Hexadecimal). If you select
Passphrase, enter a word or a character string from which a key is generated according to the length previously specified. ASCII requests an input of 5 characters
for a 64-bit key and 13 characters for a 128-bit key. For Hexadecimal, enter 10
characters for a 64-bit key or 26 characters for a 128-bit key in hexadecimal notation.
WPA-PSK
To enter a key for WPA-PSK, select the input method Passphrase or Hexadecimal.
In the Passphrase mode, the input must be 8 to 63 characters. In the Hexadecimal
mode, enter 64 characters.
WPA-EAP
Enter the credentials you have been given by your network administrator. For TLS,
provide Identity, Client Certificate, Client Key, and Server Certificate. TTLS and
PEAP require Identity and Password. Server Certificate and Anonymous Identity
are optional. YaST searches for any certificate under /etc/cert, so save the
certificates given to you to this location and restrict access to these files to 0600
(owner read and write).
Click Details to enter the advanced authentication dialog for your WPA-EAP setup.
Select the authentication method for the second stage of EAP-TTLS or EAP-PEAP
communication. If you selected TTLS in the previous dialog, choose any, MD5,
GTC, CHAP, PAP, MSCHAPv1, or MSCHAPv2. If you selected PEAP, choose any,
MD5, GTC, or MSCHAPv2. PEAP version can be used to force the use of a certain
PEAP implementation if the automatically-determined setting does not work for
you.
Click Expert Settings to leave the dialog for the basic configuration of the WLAN
connection and enter the expert configuration. The following options are available in
this dialog:
Channel
The specification of a channel on which the WLAN station should work is only
needed in Ad-hoc and Master modes. In Managed mode, the card automatically
searches the available channels for access points. In Ad-hoc mode, select one of
the 12 offered channels for the communication of your station with the other stations.
In Master mode, determine on which channel your card should offer access point
functionality. The default setting for this option is Auto.
Bit Rate
Depending on the performance of your network, you may want to set a certain bit
rate for the transmission from one point to another. In the default setting Auto, the
system tries to use the highest possible data transmission rate. Some WLAN cards
do not support the setting of bit rates.
Access Point
In an environment with several access points, one of them can be preselected by
specifying the MAC address.
Use Power Management
When you are on the road, use power saving technologies to maximize the operating
time of your battery. More information about power management is available in
Chapter 32, Power Management (page 527).
33.1.4 Utilities
hostap (package hostap) is used to run a WLAN card as an access point. More information about this package is available at the project home page (http://hostap.epitest.fi/).
kismet (package kismet) is a network diagnosis tool with which to listen to the WLAN
packet traffic. In this way, you can also detect any intrusion attempts in your network.
More information is available at http://www.kismetwireless.net/ and in
the manual page.
33.1.5 Security
If you want to set up a wireless network, remember that anybody within the transmission
range can easily access it if no security measures are implemented. Therefore, be sure
to activate an encryption method. All WLAN cards and access points support WEP
encryption. Although this is not entirely safe, it does present an obstacle for a potential
attacker. WEP is usually adequate for private use. WPA-PSK would be even better, but
it is not implemented in older access points or routers with WLAN functionality. On
some devices, WPA can be implemented by means of a firmware update. Furthermore,
Linux does not support WPA on all hardware components. If WPA is not available,
WEP is better than no encryption. In enterprises with advanced security requirements,
wireless networks should only be operated with WPA.
33.1.6 Troubleshooting
If your WLAN card fails to respond, check if you have downloaded the needed firmware.
Refer to Section 33.1.1, Hardware (page 544). The following paragraphs cover some
known problems.
the name resolution and the default gateway. This is evident from the fact that you can
ping the router but cannot surf the Internet. The Support Database features an article
on this subject at http://en.opensuse.org/SDB:Name_Resolution_Does_Not_Work_with_Several_Concurrent_DHCP_Clients.
33.2 Bluetooth
Bluetooth is a wireless technology for connecting various devices, such as cellular
phones, PDAs, peripheral devices, laptops, or system components like the keyboard or
mouse. The name is derived from the Danish king Harald Bluetooth, who united various
warring factions in Scandinavia. The Bluetooth logo is based on the runes for H (resembles a star) and B.
A number of important aspects distinguish Bluetooth from IrDA. First, the individual
devices do not need to see each other directly and, second, several devices can be
connected in a network. However, the maximum data rate is 2.1 Mbps (in the current
version 2.0). Theoretically, Bluetooth can even communicate through walls. In practice,
however, this depends on the properties of the wall and the device class. There are three
device classes with transmission ranges between 10 and 100 meters.
33.2.1 Basics
The following sections outline the basic principles of how Bluetooth works. Learn
which software requirements need to be met, how Bluetooth interacts with your system,
and how Bluetooth profiles work.
Software
To be able to use Bluetooth, you need a Bluetooth adapter (either a built-in adapter or
an external device), drivers, and a Bluetooth protocol stack. The Linux kernel already
contains the basic drivers for using Bluetooth. The Bluez system is used as protocol
stack. To make sure that the applications work with Bluetooth, the base packages
bluez-libs and bluez-utils must be installed. These packages provide a
number of needed services and utilities. Additionally, some adapters, such as Broadcom
or AVM BlueFritz!, require the bluez-firmware package to be installed. The
bluez-cups package enables printing over Bluetooth connections. If you need to
debug problems with Bluetooth connections, install the packages bluez-hcidump and bluez-test.
General Interaction
A Bluetooth system consists of four interlocked layers that provide the desired functionality:
Hardware
The adapter and a suitable driver for support by the Linux kernel.
Configuration Files
Used for controlling the Bluetooth system.
Daemons
Services that are controlled by the configuration files and provide the functionality.
Applications
The applications allow the functionality provided by the daemons to be used and
controlled by the user.
When inserting a Bluetooth adapter, its driver is loaded by the hotplug system. After
the driver is loaded, the system checks the configuration files to see if Bluetooth should
be started. If this is the case, it determines the services to start. Based on this information,
the respective daemons are started.
Profiles
In Bluetooth, services are defined by means of profiles, such as the file transfer profile,
the basic printing profile, and the personal area network profile. To enable a device to
use the services of another device, both must understand the same profile, a piece of
information that is often missing in the device package and manual. Unfortunately,
some manufacturers do not comply strictly with the definitions of the individual profiles.
Despite this, communication between the devices usually works smoothly.
In the following text, local devices are those physically connected to the computer. All
other devices that can only be accessed over wireless connections are referred to as remote devices.
33.2.2 Configuration
This section introduces Bluetooth configuration. Learn which configuration files are
involved, which tools are needed, and how to configure Bluetooth.
The configuration files for the individual components of the Bluez system are located
in the directory /etc/bluetooth. The only exception is the file /etc/
sysconfig/bluetooth for starting the components.
The configuration files described below can only be modified by the user root. Currently, there is no graphical user interface to change all settings. Most of these settings
are only interesting for experienced users with special use cases. Usually, the default
settings should be adequate.
Various settings, such as the device names and the security mode, can be changed in
the configuration file /etc/bluetooth/hcid.conf. Usually, the default settings
should be adequate. The file contains comments describing the options for the various
settings. However, most of the settings included in this file can also be made with
kbluetooth or bluez-gnome.
Two sections in the included file are designated as options and device. The first
contains general information that hcid uses for starting. The latter contains settings for
the individual local Bluetooth devices.
One of the most important settings of the options section is security auto;.
If set to auto, hcid tries to use the local PIN for incoming connections. If it fails, it
switches to none and establishes the connection anyway. For increased security, this
default setting should be set to user to make sure that the user is requested to enter a
PIN every time a connection is established.
Set the name under which the computer is displayed on the other side in the device
section. The device class, such as Desktop, Laptop, or Server, is defined in this
section. Authentication and encryption are also enabled or disabled here.
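Put together, the relevant parts of /etc/bluetooth/hcid.conf might look like the following sketch. The option names follow the BlueZ version current at the time; the concrete values are illustrative, so check the comments in your own file:

```
options {
        security user;       # always ask the user for a PIN
}
device {
        name "%h";           # name shown to remote devices (%h = hostname)
        class 0x100;         # device class (here: Computer)
        auth enable;         # require authentication
        encrypt enable;      # require encryption
}
```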
hcitool
Use hcitool to determine whether local and remote devices are detected. The command
hcitool dev lists the local devices. The output generates a line in the form
interface_name device_address for every detected local device.
Search for remote devices with the command hcitool inq. Three values are returned
for every detected device: the device address, the clock offset, and the device class.
The device address is important, because other commands use it for identifying the
target device. The clock offset mainly serves a technical purpose. The class specifies
the device type and the service type as a hexadecimal value.
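The class value can be decoded by hand. In the Bluetooth class-of-device format, bits 8 to 12 hold the major device class (1 = Computer, 2 = Phone, and so on). The value 0x3e0100 below is a made-up example, not output from a real device:

```shell
# Extract the major device class from a class-of-device value:
# shift out the low 8 bits, then mask the 5 major-class bits.
class=0x3e0100
echo $(( (class >> 8) & 0x1f ))   # -> 1, i.e. Computer
```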
hciconfig
The command /usr/sbin/hciconfig delivers further information about the local
device. If hciconfig is executed without any arguments, the output shows device
information, such as the device name (hciX), the physical device address (a 12-digit hexadecimal number in the form 00:12:34:56:78:9A), and information about the amount of transmitted data.
hciconfig hci0 name displays the name that is returned by your computer when
it receives requests from remote devices. As well as querying the settings of the local
device, hciconfig can modify these settings. For example, hciconfig hci0
name TEST sets the name to TEST.
sdptool
Use sdptool to check which services are made available by a specific device. The
command sdptool browse device_address returns all services of a device.
Use sdptool search service_code to search for a specific service. This
command scans all accessible devices for the requested service. If one of the devices
offers the service, the program prints the full service name returned by the device together with a brief description. View a list of all possible service codes by entering
sdptool without any parameters.
Instead of 00:12:34:56:89:90, the output should contain the local device address
baddr1 or baddr2. Now this interface must be assigned an IP address and activated.
On H1, do this with the following two commands:
ip addr add 192.168.1.3/24 dev bnep0
ip link set bnep0 up
Now H1 can be accessed from H2 at the IP 192.168.1.3. Use the command ssh
192.168.1.4 to access H2 from H1, assuming H2 runs an sshd, which is activated
by default in openSUSE. The command ssh 192.168.1.4 can also be run as a
normal user.
33.2.6 Troubleshooting
If you have difficulties establishing a connection, proceed according to the following
list. Remember that the error can be on either side of a connection or even on both sides.
If possible, reconstruct the problem with another Bluetooth device to verify that the
device is not defective.
Is the local device listed in the output of hcitool dev?
If the local device is not listed in this output, hcid is not started or the device is not
recognized as a Bluetooth device. This can have various causes. The device may
be defective or the correct driver may be missing. Laptops with built-in Bluetooth
often have an on and off switch for wireless devices, like WLAN and Bluetooth.
Check the manual of your laptop to see if your device has such a switch. Restart
the Bluetooth system with the command rcbluetooth restart and check
if any errors are reported in /var/log/messages.
Does your Bluetooth adapter need a firmware file?
If it does, install bluez-firmware and restart the Bluetooth system with
rcbluetooth restart.
Does the output of hcitool inq return other devices?
Test this command more than once. The connection may have interferences, because
the frequency band of Bluetooth is also used by other devices.
Can the remote device see your computer?
Try to establish the connection from the remote device. Check if this device sees
the computer.
Can a network connection be established (see Section 33.2.5, Example Establishing
a Network Connection via Bluetooth (page 558))?
The setup described in Section 33.2.5, Example Establishing a Network Connection via Bluetooth (page 558) may not work for several reasons. For example,
one of the two computers may not support SSH. Try ping 192.168.1.3 or
ping 192.168.1.4. If this works, check if sshd is active. Another problem
could be that one of the two devices already has network settings that conflict with
the address 192.168.1.X in the example. If this is the case, try different addresses, such as 10.123.1.2 and 10.123.1.3.
If you have installed the bluez-hcidump package, you can use hcidump -X to
check what is sent between the devices. Sometimes the output helps give a hint where
the problem is, but be aware of the fact that it is only partly in clear text.
33.3.1 Software
The necessary kernel modules are included in the kernel package. The package irda
provides the necessary helper applications for supporting the infrared interface. Find
the documentation at /usr/share/doc/packages/irda/README after the installation of the package.
33.3.2 Configuration
The IrDA system service is not started automatically when the system is booted. Use
the YaST IrDA module for activation. Only one setting can be modified in this module:
the serial interface of the infrared device. The test window shows two outputs. One is
the output of irdadump, which logs all sent and received IrDA packets. This output
should contain the name of the computer and the names of all infrared devices in
transmission range. An example for these messages is shown in Section 33.3.4,
Troubleshooting (page 562). All devices to which an IrDA connection exists are listed
in the lower part of the window.
IrDA consumes a considerable amount of battery power, because a discovery packet
is sent every few seconds to detect other peripheral devices. Therefore, IrDA should
only be started when necessary if you depend on battery power. Enter the command
rcirda start to activate it or rcirda stop to deactivate it. All needed kernel
modules are loaded automatically when the interface is activated.
If preferred, configure manually in the file /etc/sysconfig/irda. This file contains only one variable, IRDA_PORT, which determines the interface to use in SIR
mode.
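The file then consists of a single assignment; the serial port below is only an example and depends on your hardware:

```
# /etc/sysconfig/irda
IRDA_PORT="/dev/ttyS1"
```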
33.3.3 Usage
Data can be sent to the device file /dev/irlpt0 for printing. The device file /dev/
irlpt0 acts just like the normal /dev/lp0 cabled interface, except the printing data
is sent wirelessly with infrared light. For printing, make sure that the printer is in visual
range of the computer's infrared interface and the infrared support is started.
A printer that is operated over the infrared interface can be configured with the YaST
printer module. Because it is not detected automatically, configure it manually by
clicking Add > Directly Connected Printers. Select IrDA Printer and click Next to
configure the printer device. Usually, irlpt0 is the right connection. Click Finish to
apply your settings. Details about operating printers in Linux are available in Chapter 7,
Printer Operation (page 97).
Communication with other hosts and with mobile phones or other similar devices is
conducted through the device file /dev/ircomm0. The Siemens S25 and Nokia 6210
mobile phones, for example, can dial and connect to the Internet with the wvdial application using the infrared interface. Synchronizing data with a Palm Pilot is also possible,
provided the device setting of the corresponding application has been set to /dev/
ircomm0.
If you want, you can address only devices that support the printer or IrCOMM protocols.
Devices that support the IrOBEX protocol, such as the 3Com Palm Pilot, can be accessed with special applications, like irobexpalm and irobexreceive. Refer to the IR-HOWTO (http://tldp.org/HOWTO/Infrared-HOWTO/) for information.
The protocols supported by the device are listed in brackets after the name of the device
in the output of irdadump. IrLAN protocol support is still a work in progress.
33.3.4 Troubleshooting
If devices connected to the infrared port do not respond, use the command irdadump
(as root) to check if the other device is recognized by the computer. Something similar
to Example 33.1, Output of irdadump (page 562) appears regularly when a Canon
BJC-80 printer is in visible range of the computer:
Example 33.1 Output of irdadump
21:41:38.435239 xid:cmd
21:41:38.525167 xid:cmd
21:41:38.615159 xid:cmd
21:41:38.705178 xid:cmd
21:41:38.795198 xid:cmd
21:41:38.885163 xid:cmd
21:41:38.965133 xid:rsp
Check the configuration of the interface if there is no output or the other device does
not reply. Verify that the correct interface is used. The infrared interface is sometimes
located at /dev/ttyS2 or at /dev/ttyS3 and an interrupt other than IRQ 3 is
sometimes used. These settings can be checked and modified in the BIOS setup menu
of almost every laptop.
A simple video camera can also help in determining whether the infrared LED lights
up at all. Most video cameras can see infrared light; the human eye cannot.
34
openSUSE comes with support for Tablet PCs with serial Wacom devices (such as
IBM/Lenovo X41, ACER TM C300/C301/C302 series, Fujitsu Lifebook T series
(T3010/T4010), HP Compaq TC4200, Motion M1200), with FinePoint devices (such
as Gateway Tablet PCs), and Fujitsu Siemens Computers P-Series. Learn how to install
and configure your Tablet PC and discover some useful Linux* applications which
accept input from digital pens.
After you have installed the Tablet PC packages and configured your digitizer correctly,
input with the pen, also called a stylus, can be used for the following actions and applications:
Logging in to KDM or GDM
Unlocking your screen on the KDE and GNOME desktops
Actions that can also be triggered by other pointing devices (such as mouse or touch
pad), for example, moving the cursor on the screen, starting applications, closing,
resizing and moving windows, shifting window focus, dragging and dropping objects
Using gesture recognition in applications of the X Window System
Drawing with The GIMP
Taking notes or sketching with applications like Jarnal or Xournal or editing larger
amounts of text with Dasher
If you want to use xvkbd after login, start it from the main menu or with xvkbd from
a shell.
that resembles the Graffiti* alphabet. When activated, xstroke sends the input to the
currently focused window.
1 Start xstroke from the main menu or with xstroke from a shell. This adds a
pencil icon to your system tray.
2 Start the application for which you want to create text input with the pen (for
example, a terminal window, a text editor, or an OpenOffice.org Writer).
3 To activate the gesture recognition mode, click the pencil icon once.
4 Perform some gestures on the graphics tablet with the pen or another pointing
device. xstroke captures the gestures and transfers them to text that appears in
the application window that has the focus.
5 To switch focus to a different window, click the desired window with the pen
and hold for a moment (or use the keyboard shortcut defined in your desktop's
control center).
6 To deactivate the gesture recognition mode, click the pencil icon again.
Dasher is another useful application. It was designed for situations where keyboard input
is impractical or unavailable. With a bit of training, you can rapidly enter larger amounts
of text using only the pen (or other input devices; it can even be driven with an eye
tracker).
Start Dasher from the main menu or with dasher from a shell. Move your pen in one
direction and the application starts to zoom into the letters on the right side. From the
letters passing the cross hairs in the middle, the text is created or predicted and is
printed to the upper part of the window. To stop or start writing, click the display once
with the pen. Modify the zooming speed at the bottom of the window.
The Dasher concept works for many languages. For more information, refer to the
Dasher Web site, which offers comprehensive documentation, demonstrations and
training texts. Find it at http://www.inference.phy.cam.ac.uk/dasher/
34.7 Troubleshooting
Virtual Keyboard Does Not Appear on Login Screen
Occasionally, the virtual keyboard is not displayed on the login screen. To solve
this, restart the X server by pressing Ctrl + Alt + Backspace or press the appropriate key
on your Tablet PC (if you use a slate model without integrated keyboard). If the
virtual keyboard still does not show, connect an external keyboard to your slate
model and log in using the hardware keyboard.
Note that the commands above depend on the contents of your /etc/X11/xorg.conf configuration file. If you have configured your device with SaX2 as described in Section 34.2, Configuring Your Tablet Device (page 567), the commands
should work as they are written. If you have changed the Identifier of the
tablet stylus input device in xorg.conf manually, replace "Mouse[7]" with
the new Identifier.
35
Whenever Linux is used in a networked environment, you can use the kernel functions
that allow the manipulation of network packets to maintain a separation between internal
and external network areas. The Linux netfilter framework provides the means to establish an effective firewall that keeps different networks apart. With the help of iptables, a generic table structure for the definition of rule sets, you can precisely control the packets allowed to pass a network interface. Such a packet filter can be set up quite easily with
the help of SuSEfirewall2 and the corresponding YaST module.
nat
This table defines any changes to the source and target addresses of packets. Using
these functions also allows you to implement masquerading, which is a special
case of NAT used to link a private network with the Internet.
mangle
The rules held in this table make it possible to manipulate values stored in IP
headers (such as the type of service).
These tables contain several predefined chains to match packets:
PREROUTING
This chain is applied to incoming packets.
INPUT
This chain is applied to packets destined for the system's internal processes.
FORWARD
This chain is applied to packets that are only routed through the system.
OUTPUT
This chain is applied to packets originating from the system itself.
POSTROUTING
This chain is applied to all outgoing packets.
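A minimal rule set touching these chains, written in the format used by iptables-save and iptables-restore, could look like the following sketch (for illustration only; it is not taken from the manual's examples):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# accept loopback traffic and incoming SSH on the INPUT chain,
# drop everything else destined for local processes
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```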
Figure 35.1, iptables: A Packet's Possible Paths (page 579) illustrates the paths along
which a network packet may travel on a given system. For the sake of simplicity, the
figure lists tables as parts of chains, but in reality these chains are held within the tables
themselves.
In the simplest of all possible cases, an incoming packet destined for the system itself
arrives at the eth0 interface. The packet is first referred to the PREROUTING chain
of the mangle table then to the PREROUTING chain of the nat table. The following
step, concerning the routing of the packet, determines that the actual target of the
packet is a process of the system itself. After passing the INPUT chains of the mangle
and the filter table, the packet finally reaches its target, provided that the rules of
the filter table are actually matched.
[Figure 35.1: iptables: A Packet's Possible Paths. An incoming packet first passes the PREROUTING chains of the mangle and nat tables. A routing decision then directs it either through the INPUT chains (mangle, filter) to processes in the local system or through the FORWARD chains (mangle, filter). Locally generated packets pass a routing decision and the OUTPUT chains (mangle, nat, filter). All outgoing packets finally traverse the POSTROUTING chains of the mangle and nat tables.]
As a consequence of all this, you might experience some problems with a number of
application protocols, such as ICQ, cucme, IRC (DCC, CTCP), and FTP (in PORT
mode). Web browsers, the standard FTP program, and many other programs use the
PASV mode. This passive mode is much less problematic as far as packet filtering and
masquerading are concerned.
35.4 SuSEfirewall2
SuSEfirewall2 is a script that reads the variables set in /etc/sysconfig/
SuSEfirewall2 to generate a set of iptables rules. It defines three security zones,
although only the first and the second one are considered in the following sample configuration:
External Zone
Given that there is no way to control what is happening on the external network,
the host needs to be protected from it. In most cases, the external network is the
Internet, but it could be another insecure network, such as a WLAN.
Internal Zone
This refers to the private network, in most cases the LAN. If the hosts on this network use IP addresses from the private range (see Section 20.1.2, Netmasks and
Routing (page 303)), enable network address translation (NAT), so hosts on the
internal network can access the external one.
Demilitarized Zone (DMZ)
While hosts located in this zone can be reached both from the external and the internal network, they cannot access the internal network themselves. This setup can
be used to put an additional line of defense in front of the internal network, because
the DMZ systems are isolated from the internal network.
Any kind of network traffic not explicitly allowed by the filtering rule set is suppressed
by iptables. Therefore, each of the interfaces with incoming traffic must be placed into
one of the three zones. For each of the zones, define the services or protocols allowed.
The rule set is only applied to packets originating from remote hosts. Locally generated
packets are not captured by the firewall.
The configuration can be performed with YaST (see Section 35.4.1, Configuring the
Firewall with YaST (page 583)). It can also be made manually in the file /etc/
sysconfig/SuSEfirewall2, which is well commented. Additionally, a number
of example scenarios are available in /usr/share/doc/packages/
SuSEfirewall2/EXAMPLES.
Interfaces
All known network interfaces are listed here. To remove an interface from a zone,
select the interface, press Change, and choose No Zone Assigned. To add an interface
to a zone, select the interface, press Change and choose any of the available zones.
You may also create a special interface with your own settings by using Custom.
Allowed Services
You need this option to offer services from your system to a zone from which it is
protected. By default, the system is only protected from external zones. Explicitly
allow the services that should be available to external hosts. After selecting the
desired zone in Allowed Services for Selected Zone, activate the services from the
list.
Masquerading
Masquerading hides your internal network from external networks, such as the Internet, while enabling hosts in the internal network to access the external network
transparently. Requests from the external network to the internal one are blocked
and requests from the internal network seem to be issued by the masquerading
server when seen externally. If special services of an internal machine need to be
available to the external network, add special redirect rules for the service.
Broadcast
In this dialog, configure the UDP ports that allow broadcasts. Add the required
port numbers or services to the appropriate zone, separated by spaces. See also the
file /etc/services.
The logging of broadcasts that are not accepted can be enabled here. This may be
problematic, because Windows hosts use broadcasts to know about each other and
so generate many packets that are not accepted.
IPsec Support
Configure whether the IPsec service should be available to the external network in
this dialog. Configure which packets are trusted under Details.
Logging Level
There are two rules for the logging: accepted and not accepted packets. Packets
that are not accepted are DROPPED or REJECTED. Select from Log All, Log
Only Critical, or Do Not Log Any for both of them.
Custom Rules
Here, set special firewall rules that allow connections matching specified criteria.
When completed with the firewall configuration, exit this dialog with Next. A zone-oriented summary of your firewall configuration then opens. In it, check all settings.
All services, ports, and protocols that have been allowed are listed in this summary. To
modify the configuration, use Back. Press Accept to save your configuration.
FW_MASQUERADE (masquerading)
Set this to yes if you need the masquerading function. This provides a virtually
direct connection to the Internet for the internal hosts. It is more secure to have a
proxy server between the hosts of the internal network and the Internet. Masquerading is not needed for services a proxy server provides.
FW_MASQ_NETS (masquerading)
Specify the hosts or networks to masquerade, leaving a space between the individual entries. For example:
FW_MASQ_NETS="192.168.0.0/24 192.168.10.1"
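The /24 suffix is the netmask in CIDR notation: 24 network bits leave 8 host bits, so the first entry masquerades all 256 addresses of the 192.168.0.x network, while the second entry (no suffix) matches only the single host 192.168.10.1. The address count can be verified with shell arithmetic:

```shell
# Number of addresses covered by a /24 network: 2^(32-24) = 2^8
echo $(( 1 << (32 - 24) ))   # -> 256
```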
FW_PROTECT_FROM_INT (firewall)
Set this to yes to protect your firewall host from attacks originating in your internal
network. Services are only available to the internal network if explicitly enabled.
Also see FW_SERVICES_INT_TCP and FW_SERVICES_INT_UDP.
FW_SERVICES_EXT_TCP (firewall)
Enter the TCP ports that should be made available. Leave this blank for a normal
workstation at home that should not offer any services.
FW_SERVICES_EXT_UDP (firewall)
Leave this blank unless you run a UDP service and want to make it available to
the outside. The services that use UDP include DNS servers, IPsec, TFTP,
DHCP and others. In that case, enter the UDP ports to use.
FW_SERVICES_INT_TCP (firewall)
With this variable, define the services available for the internal network. The notation is the same as for FW_SERVICES_EXT_TCP, but the settings are applied to
the internal network. The variable only needs to be set if
FW_PROTECT_FROM_INT is set to yes.
FW_SERVICES_INT_UDP (firewall)
See FW_SERVICES_INT_TCP.
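Putting the variables together, a configuration for a small masquerading gateway could look like the following sketch. The interface names and the network are assumptions; FW_DEV_EXT and FW_DEV_INT assign interfaces to the external and internal zones:

```
# Excerpt from /etc/sysconfig/SuSEfirewall2 (illustrative values)
FW_DEV_EXT="eth0"
FW_DEV_INT="eth1"
FW_MASQUERADE="yes"
FW_MASQ_NETS="192.168.0.0/24"
FW_PROTECT_FROM_INT="no"
FW_SERVICES_EXT_TCP="ssh"
```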
After configuring the firewall, test your setup. The firewall rule sets are created by entering SuSEfirewall2 start as root. Then use telnet, for example, from an
external host to see whether the connection is actually denied. After that, review /var/
log/messages, where you should see something like this:
Mar 15 13:21:38 linux kernel: SFW2-INext-DROP-DEFLT IN=eth0
OUT= MAC=00:80:c8:94:c3:e7:00:a0:c9:4d:27:56:08:00 SRC=192.168.10.0
Other packages to test your firewall setup are nmap or nessus. The documentation of
nmap is found at /usr/share/doc/packages/nmap and the documentation of
nessus resides in the directory /usr/share/doc/packages/nessus-core
after installing the respective package.
36
SSH: Secure Network Operations
With more and more computers installed in networked environments, it often becomes
necessary to access hosts from a remote location. This normally means that a user sends
login and password strings for authentication purposes. As long as these strings are
transmitted as plain text, they could be intercepted and misused to gain access to that
user account without the authorized user even knowing about it. Apart from the fact
that this would open all the user's files to an attacker, the illegal account could be used
to obtain administrator or root access or to penetrate other systems. In the past, remote
connections were established with telnet, which offers no guards against eavesdropping
in the form of encryption or other security mechanisms. There are other unprotected
communication channels, like the traditional FTP protocol and some remote copying
programs.
The SSH suite provides the necessary protection by encrypting the authentication strings
(usually a login name and a password) and all the other data exchanged between the
hosts. With SSH, the data flow could still be recorded by a third party, but the contents
are encrypted and cannot be reverted to plain text unless the encryption key is known.
So SSH enables secure communication over insecure networks, such as the Internet.
The SSH flavor that comes with openSUSE is OpenSSH.
Quotation marks are necessary here to send both instructions with one command. It is
only by doing this that the second command is executed on venus.
including all subdirectories to the backup directory on the host venus. If this subdirectory does not exist yet, it is created automatically.
The option -p tells scp to leave the time stamp of files unchanged. -C compresses the
data transfer. This minimizes the data volume to transfer, but creates a heavier burden
on the processor.
Override this to use version 1 of the protocol with the -1 switch. To continue using
version 1 after a system update, follow the instructions in /usr/share/doc/
packages/openssh/README.SuSE. This document also describes how an SSH 1
environment can be transformed into a working SSH 2 environment with just a few
steps.
When using version 1 of SSH, the server sends its public host key and a server key,
which is regenerated by the SSH daemon every hour. Both allow the SSH client to encrypt a freely chosen session key, which is sent to the SSH server. The SSH client also
tells the server which encryption method (cipher) to use.
Version 2 of the SSH protocol does not require a server key. Both sides use an algorithm
according to Diffie-Hellman to exchange their keys.
The private host and server keys are absolutely required to decrypt the session key and
cannot be derived from the public parts. Only the SSH daemon contacted can decrypt
the session key using its private keys (see
/usr/share/doc/packages/openssh/RFC.nroff). This initial connection
phase can be watched closely by turning on the verbose debugging option -v of the
SSH client.
The client stores all public host keys in ~/.ssh/known_hosts after its first contact
with a remote host. This prevents man-in-the-middle attacks: attempts by foreign
SSH servers to use spoofed names and IP addresses. Such attacks are detected either
by a host key that is not included in ~/.ssh/known_hosts or by the server's inability to decrypt the session key in the absence of an appropriate private counterpart.
It is recommended to back up the private and public keys stored in /etc/ssh/ in a
secure, external location. This way, key modifications can be detected and the old ones
can be used again after a reinstallation. This spares users any unsettling warnings. If it
is verified that, despite the warning, it is indeed the correct SSH server, the existing
entry for the system must be removed from ~/.ssh/known_hosts.
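Removing such an entry can be done with ssh-keygen -R. The following self-contained sketch works on a demonstration file; the hostname venus and the file names are examples, and in practice the file is ~/.ssh/known_hosts (so the -f option is not needed):

```shell
# Build a demo known_hosts file containing one entry for host "venus",
# then remove that entry with ssh-keygen -R.
ssh-keygen -t rsa -N '' -f demo_host_key -q        # key just for the demo
printf 'venus %s\n' "$(cut -d' ' -f1-2 demo_host_key.pub)" > known_hosts.demo
ssh-keygen -R venus -f known_hosts.demo            # drop the entry for venus
grep '^venus' known_hosts.demo || echo removed     # prints "removed"
```

ssh-keygen keeps a backup of the original file with the suffix .old.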
this by way of another key pair, which is generated by the user. The SSH package
provides a helper program for this: ssh-keygen. After entering ssh-keygen -t rsa
or ssh-keygen -t dsa, the key pair is generated and you are prompted for the base
filename in which to store the keys.
Confirm the default setting and answer the request for a passphrase. Even if the software
suggests an empty passphrase, a text from 10 to 30 characters is recommended for the
procedure described here. Do not use short and simple words or phrases. Confirm by
repeating the passphrase. Subsequently, you will see where the private and public keys
are stored, in this example, the files id_rsa and id_rsa.pub.
Use ssh-keygen -p -t rsa or ssh-keygen -p -t dsa to change your old
passphrase. Copy the public key component (id_rsa.pub in the example) to the remote machine and save it to ~/.ssh/authorized_keys. You will be asked to
authenticate yourself with your passphrase the next time you establish a connection. If
this does not occur, verify the location and contents of these files.
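The key-generation step can be sketched as follows. The demo writes the pair to ./demo_id and, purely for the sake of a non-interactive example, uses an empty passphrase (-N ''); in real use omit that option and type a passphrase as recommended above:

```shell
# Generate an RSA key pair into ./demo_id and ./demo_id.pub
# (non-interactive sketch; interactively you would be prompted
# for the file name and a passphrase).
ssh-keygen -t rsa -N '' -f demo_id -q
# demo_id.pub is what gets appended to ~/.ssh/authorized_keys on the
# remote machine, for example with: ssh-copy-id -i demo_id.pub user@venus
head -c 7 demo_id.pub    # prints "ssh-rsa"
```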
In the long run, this procedure is more troublesome than giving your password each
time. Therefore, the SSH package provides another tool, ssh-agent, which retains the
private keys for the duration of an X session. The entire X session is started as a child
process of ssh-agent. The easiest way to do this is to set the variable usessh at the
beginning of the .xsession file to yes and log in via a display manager, such as
KDM or XDM. Alternatively, enter ssh-agent startx.
Now you can use ssh or scp as usual. If you have distributed your public key as described
above, you are no longer prompted for your password. Remember to terminate your
X session or lock it with a password protection application, such as xlock.
All the relevant changes that resulted from the introduction of version 2 of the SSH
protocol are also documented in the file /usr/share/doc/packages/openssh/
README.SuSE.
remote machine over the existing SSH connection. At the same time, X applications
started remotely and locally viewed with this method cannot be intercepted by unauthorized individuals.
By adding the option -A, the ssh-agent authentication mechanism is carried over to the
next machine. This way, you can work from different machines without having to enter
a password, but only if you have distributed your public key to the destination hosts
and properly saved it there.
Both mechanisms are deactivated in the default settings, but can be permanently activated at any time in the systemwide configuration file /etc/ssh/sshd_config
or the user's ~/.ssh/config.
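As an illustration, both mechanisms can be enabled for a single host in the user's ~/.ssh/config; the hostname venus is an example:

```
Host venus
    # Tunnel X11 connections through SSH (equivalent to ssh -X)
    ForwardX11 yes
    # Carry the ssh-agent authentication over (equivalent to ssh -A)
    ForwardAgent yes
```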
ssh can also be used to redirect TCP/IP connections. In the examples below, SSH is
told to redirect the SMTP and the POP3 port, respectively:
ssh -L 25:mail.example.com:25 jupiter.example.com
ssh -L 110:mail.example.com:110 jupiter.example.com
Both commands must be executed as root, because the connection is made to privileged
local ports. E-mail is sent and retrieved by normal users in an existing SSH connection.
The SMTP and POP3 host must be set to localhost for this to work. Additional information can be found in the manual pages for each of the programs described above
and also in the files under /usr/share/doc/packages/openssh.
1 Select the ports sshd should listen on in the SSHD TCP Ports table. The default
port number is 22. Multiple ports are allowed. To add a new port, click Add, enter
the port number and click OK. To delete a port, select it in the table, click Delete
and confirm.
2 Select the features the sshd daemon should support. To disable TCP forwarding,
uncheck Allow TCP Forwarding. Disabling TCP forwarding does not improve
security unless users are also denied shell access, as they can always install their
own forwarders. See Section 36.7, X, Authentication, and Forwarding Mechanisms (page 593) for more information about TCP forwarding.
To disable X forwarding, uncheck Allow X11 Forwarding. If this option is disabled, any X11 forward requests by the client will return an error. However, users
can always install their own forwarders. See Section 36.7, X, Authentication,
and Forwarding Mechanisms (page 593) for more information about X forwarding.
In Allow Compression, determine whether the connection between the server and
clients should be compressed. After setting these options, click Next.
3 In Print Message of the Day After Login, determine whether sshd should print
the message from /etc/motd when a user logs in interactively. If you want to
disable login as root, uncheck Permit Root Login.
In Maximum Authentication Tries enter the maximum allowed number of authentication attempts per connection. Password Authentication specifies whether
password authentication is allowed. RSA Authentication specifies whether pure
RSA authentication is allowed. This option applies to SSH protocol version 1
only. Public Key Authentication specifies whether public key authentication is
allowed. This option applies to protocol version 2 only.
4 Click Accept to save the configuration.
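The YaST options above correspond to directives in /etc/ssh/sshd_config. The following fragment is a sketch with illustrative values, not a recommended configuration:

```
# Listen port(s); multiple Port lines are allowed
Port 22
# Feature toggles from step 2
AllowTcpForwarding yes
X11Forwarding yes
Compression yes
# Options from step 3
PrintMotd yes
PermitRootLogin no
MaxAuthTries 6
PasswordAuthentication yes
RSAAuthentication yes      # SSH protocol version 1 only
PubkeyAuthentication yes   # SSH protocol version 2 only
```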
37
An increasing number of authentication mechanisms are based on cryptographic procedures. Digital certificates that assign cryptographic keys to their owners play an important
role in this context. These certificates are used for communication and can also be
found, for example, on company ID cards. The generation and administration of certificates is mostly handled by official institutions that offer this as a commercial service.
In some cases, however, it may make sense to carry out these tasks yourself, for example,
if a company does not wish to pass personal data to third parties.
YaST provides two modules for certification, which offer basic management functions
for digital X.509 certificates. The following sections explain the basics of digital certification and how to use YaST to create and administer certificates of this type. For more
detailed information, refer to http://www.ietf.org/html.charters/
pkix-charter.html.
Private Key
The private key must be kept safely by the key owner. Accidental publication of
the private key compromises the key pair and renders it useless.
Public Key
The key owner circulates the public key for use by third parties.
An X.509v3 certificate contains the following fields: Version, Serial Number,
Signature, Issuer, Validity (period of validity), Subject, Issuer Unique ID,
Subject Unique ID, and Extensions.
A certificate revocation list (CRL) contains the following fields: Version,
Signature, Issuer, This Update, Next Update, and Extensions.
CA Name
Enter the technical name of the CA. Directory names, among other things,
are derived from this name, which is why only the characters listed in the
help can be used. The technical name is also displayed in the overview when
the module is started.
Common Name
Enter the name to use to refer to the CA.
E-Mail Addresses
Several e-mail addresses can be entered that can be seen by the CA user.
This can be helpful for inquiries.
Country
Select the country where the CA is operated.
Organisation, Organisational Unit, Locality, State
Optional values
4 Click Next.
5 Enter a password in the second dialog. This password is always required when
using the CA, for example, when creating a sub-CA or generating certificates.
The text fields have the following meaning:
Key Length
Key Length contains a meaningful default and does not generally need to be
changed unless an application cannot deal with this key length.
Valid Period (days)
The Valid Period in the case of a CA defaults to 3650 days (roughly ten
years). This long period makes sense because the replacement of a deleted
CA involves an enormous administrative effort.
Clicking Advanced Options opens a dialog for setting different attributes from
the X.509 extensions (Figure 37.4, YaST CA Module: Extended Settings
(page 608)). These values have rational default settings and should only be changed
if you are really sure of what you are doing.
6 YaST displays the current settings for confirmation. Click Create. The root CA
is created and then appears in the overview.
TIP
In general, it is best not to allow user certificates to be issued by the root CA.
It is better to create at least one sub-CA and create the user certificates from
there. This has the advantage that the root CA can be kept isolated and secure,
for example, on an isolated computer on secure premises. This makes it very
difficult to attack the root CA.
4 Click Advanced and select Create SubCA. This opens the same dialog as for
creating a root CA.
5 Proceed as described in Section 37.2.1, Creating a Root CA (page 602).
6 Select the tab Certificates. Reset compromised or otherwise unwanted sub-CAs
here using Revoke. Revocation is not enough to deactivate a sub-CA on its own.
Also publish revoked sub-CAs in a CRL. The creation of CRLs is described in
Section 37.2.5, Creating CRLs (page 609).
7 Finish with Ok.
the e-mail address of the recipient (the public key owner) to be included in the certificate.
In the case of server and client certificates, the hostname of the server must be entered
in the Common Name field. The default validity period for certificates is 365 days.
To create client and server certificates, do the following:
1 Start YaST and open the CA module.
2 Select the required CA and click Enter CA.
3 Enter the password if entering a CA for the first time. YaST displays the CA key
information in the Description tab.
4 Click Certificates (see Figure 37.3, Certificates of a CA (page 606)).
Figure 37.3 Certificates of a CA
5 Click Add > Add Server Certificate and create a server certificate.
6 Click Add > Add Client Certificate and create a client certificate. Do not forget
to enter an e-mail address.
7 Finish with Ok.
5 Change the associated value on the right side and set or delete the critical setting
with critical.
6 Click Next to see a short summary.
7 Finish your changes with Save.
TIP
All changes to the defaults only affect objects created after this point. Already
existing CAs and certificates remain unchanged.
must be entered manually. You must always enter several passwords (see Table 37.3,
Passwords during LDAP Export (page 610)).
Table 37.3, Passwords during LDAP Export, lists the required passwords: the
LDAP Password and the Certificate Password.
TIP
If you select Import here, you can select the source in the file system. This option can also be used to import certificates from a transport medium, such as
a USB stick.
To import a common server certificate, do the following:
1 Start YaST and open Common Server Certificate under Security and Users.
2 View the data for the current certificate in the description field after YaST has
been started.
3 Select Import and the certificate file.
4 Enter the password and click Next. The certificate is imported and then displayed in
the description field.
5 Close YaST with Finish.
38
Network Authentication: Kerberos
An open network provides no means to ensure that a workstation can identify its users
properly except the usual password mechanisms. In common installations, the user
must enter the password each time a service inside the network is accessed. Kerberos
provides an authentication method with which a user registers once and is then trusted in
the complete network for the rest of the session. To have a secure network, the following
requirements must be met:
Have all users prove their identity for each desired service and make sure that no
one can take the identity of someone else.
Make sure that each network server also proves its identity. Otherwise an attacker
might be able to impersonate the server and obtain sensitive information transmitted
to the server. This concept is called mutual authentication, because the client authenticates to the server and vice versa.
Kerberos helps you meet these requirements by providing strongly encrypted authentication. The following shows how this is achieved. Only the basic principles of Kerberos
are discussed here. For detailed technical instruction, refer to the documentation provided
with your implementation of Kerberos.
credential
Users or clients need to present some kind of credentials that authorize them to request services. Kerberos knows two kinds of credentials: tickets and authenticators.
ticket
A ticket is a per-server credential used by a client to authenticate at a server from
which it is requesting a service. It contains the name of the server, the client's name,
the client's Internet address, a time stamp, a lifetime, and a random session key.
All this data is encrypted using the server's key.
authenticator
Combined with the ticket, an authenticator is used to prove that the client presenting
a ticket is really the one it claims to be. An authenticator is built of the client's
name, the workstation's IP address, and the current workstation's time all encrypted
with the session key only known to the client and the server from which it is requesting a service. An authenticator can only be used once, unlike a ticket. A client
can build an authenticator itself.
principal
A Kerberos principal is a unique entity (a user or service) to which it can assign a
ticket. A principal consists of the following components:
Primary: the first part of the principal, which can be the same as your username
in the case of a user.
Instance: some optional information characterizing the primary. This string is
separated from the primary by a /.
Realm: this specifies your Kerberos realm. Normally, your realm is your domain
name in uppercase letters.
mutual authentication
Kerberos ensures that both client and server can be sure of each other's identity.
They share a session key, which they can use to communicate securely.
session key
Session keys are temporary private keys generated by Kerberos. They are known
to the client and used to encrypt the communication between the client and the
server for which it requested and received a ticket.
replay
Almost all messages sent in a network can be eavesdropped, stolen, and resent. In
the Kerberos context, this would be most dangerous if an attacker manages to obtain
your request for a service containing your ticket and authenticator. He could then
try to resend it (replay) to impersonate you. However, Kerberos implements several
mechanisms to deal with that problem.
server or service
Service is used to refer to a specific action to perform. The process behind this action
is referred to as a server.
39
Kerberos services via DNS. To do so, it is helpful if your realm name is a subdomain
of your DNS domain name.
Unlike the DNS name space, Kerberos is not hierarchical. You cannot set up a realm
named EXAMPLE.COM, have two subrealms named DEVELOPMENT and
ACCOUNTING underneath it, and expect the two subordinate realms to somehow inherit
principals from EXAMPLE.COM. Instead, you would have three separate realms for
which you would have to configure cross-realm authentication for users from one realm
to interact with servers or other users from another realm.
For the sake of simplicity, assume you are setting up just one realm for your entire organization. For the remainder of this section, the realm name EXAMPLE.COM is used
in all examples.
5 Configure /etc/nsswitch.conf to use only local files for user and group
lookup. Change the lines for passwd and group to look like this:
passwd: files
group:  files
Edit the passwd, group, and shadow files in /etc and remove the lines that
start with a + character (these are for NIS lookups).
6 Disable all user accounts except root's account by editing /etc/shadow and
replacing the hashed passwords with * or ! characters.
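This edit can be sketched with awk. The example below operates on a local demonstration copy (shadow.demo) rather than /etc/shadow itself; apply it to the real file only after making a backup:

```shell
# Lock every account except root by replacing its password hash with "!".
# The two sample lines stand in for real /etc/shadow entries.
printf 'root:$6$abc:19000:0:99999:7:::\njoe:$6$def:19000:0:99999:7:::\n' > shadow.demo
awk -F: 'BEGIN{OFS=":"} $1!="root"{$2="!"} {print}' shadow.demo
# joe's hash becomes "!", root's line is left unchanged
```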
It is also possible to adjust the maximum deviation Kerberos allows when checking
time stamps. This value (called clock skew) can be set in the krb5.conf file as described in Section Adjusting the Clock Skew (page 633).
7 Create a principal for yourself. Refer to Section 39.5.2, Creating a Principal
(page 628) for details.
This shows that there are now a number of principals in the database. All of these are
for internal use by Kerberos.
To configure ticket-related options in the Advanced Settings dialog, choose from the
following options:
Specify the Default Ticket Lifetime and the Default Renewable Lifetime in days,
hours, or minutes (using the units of measurement d, h, and m, with no blank space
between the value and the unit).
To forward your complete identity to use your tickets on other hosts, select Forwardable.
Enable the transfer of certain tickets by selecting Proxiable.
Keep tickets available with a PAM module even after a session has ended by enabling Retained.
Enable Kerberos authentication support for your OpenSSH client by selecting the
corresponding check box. The client then uses Kerberos tickets to authenticate with
the SSH server.
Exclude a range of user accounts from using Kerberos authentication by providing
a value for the Minimum UID that a user of this feature must have. For instance,
you may want to exclude the system administrator (root).
Use Clock Skew to set a value for the allowable difference between the time stamps
and your host's system time.
To keep the system time in sync with an NTP server, you can also set up the host
as an NTP client by selecting NTP Configuration, which opens the YaST NTP
client dialog that is described in Section 24.1, Configuring an NTP Client with
YaST (page 393). After finishing the configuration, YaST performs all the necessary
changes and the Kerberos client is ready for use.
Static Configuration
One way to configure Kerberos is to edit the configuration file /etc/krb5.conf.
The file installed by default contains various sample entries. Erase all of these entries
before starting. krb5.conf is made up of several sections, each introduced by the
section name included in brackets like [this].
To configure your Kerberos clients, add the following stanza to krb5.conf (where
kdc.example.com is the hostname of the KDC):
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = kdc.example.com
admin_server = kdc.example.com
}
The default_realm line sets the default realm for Kerberos applications. If you
have several realms, just add additional statements to the [realms] section.
Also add a statement to this file that tells applications how to map hostnames to a realm.
For example, when connecting to a remote host, the Kerberos library needs to know in
which realm this host is located. This must be configured in the [domain_realm]
section:
[domain_realm]
.example.com = EXAMPLE.COM
www.foobar.com = EXAMPLE.COM
This tells the library that all hosts in the example.com DNS domains are in the
EXAMPLE.COM Kerberos realm. In addition, one external host named www.foobar
.com should also be considered a member of the EXAMPLE.COM realm.
DNS-Based Configuration
DNS-based Kerberos configuration makes heavy use of SRV records. See RFC 2052,
A DNS RR for specifying the location of services, at http://www.ietf.org.
These records are not supported in earlier implementations of the BIND name server.
At least BIND version 8 is required for this.
The name of an SRV record, as far as Kerberos is concerned, is always in the format
_service._proto.realm, where realm is the Kerberos realm. Domain names in
DNS are case insensitive, so case-sensitive Kerberos realms would break when using
this configuration method. _service is a service name (different names are used
when trying to contact the KDC or the password service, for example). _proto can
be either _udp or _tcp, but not all services support both protocols.
The data portion of SRV resource records consists of a priority value, a weight, a port
number, and a hostname. The priority defines the order in which hosts should be tried
(lower values indicate a higher priority). The weight is there to support some sort of
load balancing among servers of equal priority. You probably do not need any of this,
so it is okay to set these to zero.
MIT Kerberos currently looks up the following names when looking for services:
_kerberos
This defines the location of the KDC daemon (the authentication and ticket granting
server). Typical records look like this:
_kerberos._udp.EXAMPLE.COM.  IN SRV    0 0 88 kdc.example.com.
_kerberos._tcp.EXAMPLE.COM.  IN SRV    0 0 88 kdc.example.com.
_kerberos-adm
This describes the location of the remote administration service. Typical records
look like this:
_kerberos-adm._tcp.EXAMPLE.COM. IN SRV    0 0 749 kdc.example.com.
Because kadmind does not support UDP, there should be no _udp record.
As with the static configuration file, there is a mechanism to inform clients that a specific host is in the EXAMPLE.COM realm, even if it is not part of the example.com
DNS domain. This can be done by attaching a TXT record to _kerberos.hostname,
as shown here:
_kerberos.www.foobar.com.   IN TXT "EXAMPLE.COM"
utes). This means a ticket can have a time stamp somewhere between five minutes ago
and five minutes in the future from the server's point of view.
When using NTP to synchronize all hosts, you can reduce this value to about one minute.
The clock skew value can be set in /etc/krb5.conf like this:
[libdefaults]
clockskew = 120
Replace the username newbie with your own. Restart kadmind for the change to take
effect.
You should now be able to perform Kerberos administration tasks remotely using the
kadmin tool. First, obtain a ticket for your admin role and use that ticket when connecting
to the kadmin server:
kadmin -p newbie/admin
Authenticating as principal newbie/[email protected] with password.
Password for newbie/[email protected]:
kadmin: getprivs
current privileges: GET ADD MODIFY DELETE
kadmin:
Using the getprivs command, verify which privileges you have. The list shown
above is the full set of privileges.
As an example, modify the principal newbie:
kadmin -p newbie/admin
Authenticating as principal newbie/[email protected] with password.
Password for newbie/[email protected]:
kadmin: getprinc newbie
Principal: [email protected]
Expiration date: [never]
Last password change: Wed Jan 12 17:28:46 CET 2005
Password expiration date: [none]
Maximum ticket life: 0 days 10:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Wed Jan 12 17:47:17 CET 2005 (admin/[email protected])
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, Triple DES cbc mode with HMAC/sha1, no salt
Key: vno 1, DES cbc mode with CRC-32, no salt
Attributes:
Policy: [none]
kadmin: modify_principal -maxlife "8 hours" newbie
Principal "[email protected]" modified.
kadmin: getprinc joe
Principal: [email protected]
Expiration date: [never]
Last password change: Wed Jan 12 17:28:46 CET 2005
Password expiration date: [none]
Maximum ticket life: 0 days 08:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Wed Jan 12 17:59:49 CET 2005 (newbie/[email protected])
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, Triple DES cbc mode with HMAC/sha1, no salt
Key: vno 1, DES cbc mode with CRC-32, no salt
Attributes:
Policy: [none]
kadmin:
This changes the maximum ticket lifetime to eight hours. For more information about
the kadmin command and the options available, refer to http://web.mit.edu/
kerberos/www/krb5-1.4/krb5-1.4/doc/krb5-admin.html#Kadmin
%20Options or look at man 8 kadmin.
Common service principal primaries and the services they cover include: host, nfs,
HTTP, imap (IMAP), pop (POP3), and ldap (LDAP).
Service principals are similar to user principals, but have significant differences. The
main difference between a user principal and a service principal is that the key of the
former is protected by a password: when a user obtains a ticket-granting ticket from
the KDC, he needs to type his password so Kerberos can decrypt the ticket. It would
be quite inconvenient for the system administrator if he had to obtain new tickets for
the SSH daemon every eight hours or so.
Instead, the key required to decrypt the initial ticket for the service principal is extracted
by the administrator from the KDC once and stored in a local file called the keytab.
Services such as the SSH daemon read this key and use it to obtain new tickets automatically when needed. The default keytab file resides in /etc/krb5.keytab.
To create a host service principal for jupiter.example.com, enter the following
commands during your kadmin session:
kadmin -p newbie/admin
Authenticating as principal newbie/[email protected] with password.
Password for newbie/[email protected]:
kadmin: addprinc -randkey host/jupiter.example.com
WARNING: no policy specified for host/[email protected];
defaulting to no policy
Principal "host/[email protected]" created.
Instead of setting a password for the new principal, the -randkey flag tells kadmin
to generate a random key. This is used here because no user interaction is wanted for
this principal. It is a server account for the machine.
Finally, extract the key and store it in the local keytab file /etc/krb5.keytab.
This file is owned by the superuser, so you must be root to execute the next command
in the kadmin shell:
kadmin: ktadd host/jupiter.example.com
Entry for principal host/jupiter.example.com with kvno 3, encryption type
Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/jupiter.example.com with kvno 3, encryption type
DES cbc mode with CRC-32 added to keytab WRFILE:/etc/krb5.keytab.
kadmin:
When completed, make sure that you destroy the admin ticket obtained with kinit above
with kdestroy.
and would like the authenticating application to obtain an initial Kerberos ticket on his
behalf. To configure PAM support for Kerberos, use the following command:
pam-config --add --krb5
The above command adds the pam_krb5 module to the existing PAM configuration
files and makes sure it is called in the right order. To make fine adjustments to the way
in which pam_krb5 is used, edit the file /etc/krb5.conf and add default applications to pam. For details, refer to the manual page with man 5 pam_krb5.
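For instance, defaults for pam_krb5 can be placed in the [appdefaults] section of /etc/krb5.conf. The values below are illustrative, not required settings:

```
[appdefaults]
pam = {
    ticket_lifetime = 1d
    renew_lifetime  = 1d
    forwardable     = true
    minimum_uid     = 1000
}
```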
The pam_krb5 module was specifically not designed for network services that accept
Kerberos tickets as part of user authentication. This is an entirely different matter, which
is discussed below.
A third, and maybe the best solution, is to tell OpenLDAP to use a special keytab file.
To do this, start kadmin, and enter the following command after you have added the
principal ldap/earth.example.com:
ktadd -k /etc/openldap/ldap.keytab ldap/[email protected]
To tell OpenLDAP to use a different keytab file, change the following variable in
/etc/sysconfig/openldap:
OPENLDAP_KRB5_KEYTAB="/etc/openldap/ldap.keytab"
As you can see, ldapsearch prints a message that it started GSSAPI authentication. The
next message is very cryptic, but it shows that the security strength factor (SSF for
short) is 56 (the value 56 is somewhat arbitrary; most likely it was chosen because
this is the number of bits in a DES encryption key). What this tells you is that GSSAPI
authentication was successful and that encryption is being used to provide integrity
protection and confidentiality for the LDAP connection.
In Kerberos, authentication is always mutual. This means that not only have you authenticated yourself to the LDAP server, but also the LDAP server authenticated itself to
you. In particular, this means communication is with the desired LDAP server, rather
than some bogus service set up by an attacker.
The second statement gives authenticated users write access to the loginShell attribute of their own LDAP entry. The third statement gives all authenticated users read
access to the entire LDAP directory.
There is one minor piece of the puzzle missinghow the LDAP server can find out
that the Kerberos user [email protected] corresponds to the LDAP distinguished
name uid=joe,ou=people,dc=example,dc=com. This sort of mapping must
be configured manually using the authz-regexp directive. In this example, add the following to slapd.conf:
authz-regexp
uid=(.*),cn=GSSAPI,cn=auth
uid=$1,ou=people,dc=example,dc=com
To understand how this works, you need to know that when SASL authenticates a user,
OpenLDAP forms a distinguished name from the name given to it by SASL (such as
joe) and the name of the SASL flavor (GSSAPI). The result would be
uid=joe,cn=GSSAPI,cn=auth.
If an authz-regexp has been configured, it checks the DN formed from the SASL
information using the first argument as a regular expression. If this regular expression
matches, the name is replaced with the second argument of the authz-regexp
statement. The placeholder $1 is replaced with the substring matched by the (.*)
expression.
More complicated match expressions are possible. If you have a more complicated directory structure or a schema in which the username is not part of the DN, you can even
use search expressions to map the SASL DN to the user DN.
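A sketch of such a search-based mapping follows; the base DN and the uid attribute in the LDAP URL are hypothetical and depend on your schema:

```
authz-regexp
   uid=(.*),cn=GSSAPI,cn=auth
   ldap:///dc=example,dc=com??sub?(uid=$1)
```

Instead of substituting the match into a fixed DN template, the LDAP URL makes slapd search the directory for an entry whose uid attribute equals the matched name and use that entry's DN.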
40
Every user has some confidential data that third parties should not be able to access.
The more you rely on mobile computing and on working in different environments and
networks, the more carefully you should handle your data. The encryption of files or
entire partitions is recommended if others have network or physical access to your
system. Laptops or removable media, such as external hard disks or USB sticks, are
prone to being lost or stolen. Thus, it is recommended to encrypt the parts of your file system that hold confidential data.
There are several ways to protect your data by means of encryption:
Encrypting a Hard Disk Partition
You can create an encrypted partition with YaST during installation or in an already
installed system. Refer to Section 40.1.1, Creating an Encrypted Partition during
Installation (page 645) and Section 40.1.2, Creating an Encrypted Partition on a
Running System (page 646) for details. This option can also be used for removable
media, such as external hard disks, as described in Section 40.1.4, Encrypting the
Content of Removable Media (page 647).
Creating an Encrypted File as Container
You can create an encrypted file on your hard disk or on a removable medium with
YaST at any time. The encrypted file can then be used to store other files or folders.
For more information, refer to Section 40.1.3, Creating an Encrypted File as a
Container (page 646).
Encrypting Home Directories
With openSUSE, you can also create encrypted home directories for users. When
the user logs in to the system, the encrypted home directory is mounted and the
contents are made available to the user. Refer to Section 40.2, Using Encrypted
Home Directories (page 647) for more information.
Encrypting Single Files
If you only have a small number of files that hold sensitive or confidential data,
you can encrypt them individually and protect them with a password using the vi
editor. Refer to Section 40.3, Using vi to Encrypt Single Files (page 649) for more
information.
WARNING: Encrypted Media Offers Limited Protection
The methods described in this chapter offer only limited protection. You cannot
protect your running system from being compromised. After the encrypted
medium is successfully mounted, everybody with appropriate permissions has
access to it. However, encrypted media are useful in case of loss or theft of
your computer or to prevent unauthorized individuals from reading your confidential data.
password when prompted for it. When you are done working on the partition, unmount it with umount name_of_partition to protect it from access by other users.
When you are installing your system on a machine where several partitions already
exist, you can also decide to encrypt an existing partition during installation. In this
case, follow the description in Section 40.1.2, Creating an Encrypted Partition on a Running System (page 646) and be aware that this action destroys all data on the partition being encrypted.
The advantage of encrypted container files over encrypted partitions is that they can be
added without repartitioning the hard disk. They are mounted with the help of a loop
device and behave just like normal partitions.
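The steps YaST performs behind the scenes can be sketched on the command line as follows (the device and path names are purely illustrative, the container is assumed to be LUKS-formatted, and the commands must be run as root):

```
# Attach the container file to a free loop device
losetup /dev/loop0 /root/container.img
# Open the encrypted device; it appears under /dev/mapper
cryptsetup luksOpen /dev/loop0 cryptvault
# Mount it like any normal partition
mount /dev/mapper/cryptvault /mnt/vault
# ... work with the files ...
umount /mnt/vault
cryptsetup luksClose cryptvault
losetup -d /dev/loop0
```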
This creates a home directory with the initial size of 200 MB.
4 To change the size of the home directory at any time, use cryptconfig enlarge-size image size_to_add_in_MB.
For more information about the command line tool, run the cryptconfig --help
command.
Internally, the home directory is provided by means of the PAM module pam_mount. If you need to add an additional login method that provides encrypted home directories, you may have to add this module to the respective configuration file in /etc/pam.d/. For more information, see also Chapter 18, Authentication with PAM (page 263) and the man page of pam_mount.
41 Confining Privileges with Novell AppArmor
Many security vulnerabilities result from bugs in trusted programs. A trusted program
runs with privilege that some attacker would like to have. The program fails to keep
that trust if there is a bug in the program that allows the attacker to acquire that privilege.
Novell AppArmor is an application security solution designed specifically to provide least privilege confinement to suspect programs. AppArmor allows the administrator to specify the domain of activities the program can perform by developing a security profile for that application: a listing of files that the program may access and the operations the program may perform.
Effective hardening of a computer system requires minimizing the number of programs that mediate privilege, then securing those programs as much as possible. With Novell
AppArmor, you only need to profile the programs that are exposed to attack in your
environment, which drastically reduces the amount of work required to harden your
computer. AppArmor profiles enforce policies to make sure that programs do what they
are supposed to do, but nothing else.
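Such a profile is a plain text file under /etc/apparmor.d/. The following hypothetical profile (the program path and the individual rules are assumptions for illustration, not a shipped profile) shows the general shape: a list of files with access modes plus a capability the program is allowed to use:

```
# /etc/apparmor.d/usr.sbin.myserver -- illustrative profile sketch
/usr/sbin/myserver {
  #include <abstractions/base>

  capability net_bind_service,

  /etc/myserver.conf r,
  /var/log/myserver.log w,
  /var/run/myserver.pid w,
}
```

Anything not listed in the profile is denied once the profile is in enforce mode.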
Administrators only need to care about the applications that are vulnerable to attacks
and generate profiles for these. Hardening a system thus comes down to building and
maintaining the AppArmor profile set and monitoring any policy violations or exceptions
logged by AppArmor's reporting facility.
Building AppArmor profiles to confine an application is very straightforward and intuitive. AppArmor ships with several tools that assist in profile creation. It does not require
you to do any programming or script handling. The only task that is required from the
administrator is to determine a policy of strictest access and execute permissions for
each application that needs to be hardened.
Updates or modifications to the application profiles are only required if the software
configuration or the desired range of activities changes. AppArmor offers intuitive tools
to handle profile updates or modifications.
Users should not notice AppArmor at all. It runs behind the scenes and does not require
any user interaction. Performance is not affected noticeably by AppArmor. If some
activity of the application is not covered by an AppArmor profile or if some activity
of the application is prevented by AppArmor, the administrator needs to adjust the
profile of this application to cover this kind of behavior.
This guide outlines the basic tasks that need to be performed with AppArmor to effectively harden a system. For more in-depth information, refer to the Novell AppArmor Administration Guide.
apparmor-parser
libapparmor
apparmor-docs
yast2-apparmor
apparmor-profiles
apparmor-utils
audit
Cron Jobs
Programs that the cron daemon runs periodically read input from a variety of sources.
To find out which processes are currently running with open network ports and might
need a profile to confine them, run aa-unconfined as root.
Example 41.1 Output of aa-unconfined
19848
19887
19947
29205
Each of the processes in the above example labeled not confined might need a
custom profile to confine it. Those labeled confined by are already protected by
AppArmor.
TIP: For More Information
For more information about choosing the right applications to profile, refer to Section Determining Programs to Immunize (Chapter 1, Immunizing Programs, Novell AppArmor Administration Guide).
or
Outline the basic profile by selecting YaST > Novell AppArmor > Add Profile Wizard and specifying the complete path of the application to profile.
A basic profile is outlined and AppArmor is put into learning mode, which means
that it logs any activity of the program you are executing but does not yet restrict
it.
2 Run the full range of the application's actions to let AppArmor get a very specific
picture of its activities.
3 Let AppArmor analyze the log files generated in Step 2 (page 656) by typing S in aa-genprof.
or
Analyze the logs by clicking Scan system log for AppArmor events in the Add
Profile Wizard and following the instructions given in the wizard until the profile
is completed.
AppArmor scans the logs it recorded during the application's run and asks you
to set the access rights for each event that was logged. Either set them for each
file or use globbing.
4 Depending on the complexity of your application, it might be necessary to repeat
Step 2 (page 656) and Step 3 (page 656). Confine the application, exercise it under
the confined conditions, and process any new log events. To properly confine
the full range of an application's capabilities, you might be required to repeat this
procedure often.
5 Once all access permissions are set, your profile is set to enforce mode. The
profile is applied and AppArmor restricts the application according to the profile
just created.
If you started aa-genprof on an application that had an existing profile that was
in complain mode, this profile remains in learning mode upon exit of this learning
cycle. For more information about changing the mode of a profile, refer to Section aa-complain: Entering Complain or Learning Mode (Chapter 4, Building Profiles from the Command Line, Novell AppArmor Administration Guide).
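Condensed into a command line transcript, the profiling cycle above looks roughly like this (the application path is a placeholder):

```
aa-genprof /usr/sbin/myserver   # outline a basic profile, enter learning mode
# ... exercise the application in a second terminal ...
# back in aa-genprof: press S to scan the logs and answer the prompts,
# then F to finish; the profile is switched to enforce mode
aa-logprof                      # later: process new log events into the profile
```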
/var/log/messages
If auditd is not used, AppArmor events are logged in the standard system log under
/var/log/messages. An example entry would look like the following:
Feb 22 18:29:14 dhcp-81 klogd: audit(1140661749.146:3): REJECTING w access
to /dev/console (mdnsd(3239) profile /usr/sbin/mdnsd active
/usr/sbin/mdnsd)
dmesg
If auditd is not running, AppArmor events can also be checked using the dmesg
command:
audit(1140661749.146:3): REJECTING w access to /dev/console (mdnsd(3239)
profile /usr/sbin/mdnsd active /usr/sbin/mdnsd)
To adjust the profile, analyze the log messages relating to this application again as described in Step 3 (page 656). Determine the access rights or restrictions when prompted.
TIP: For More Information
For more information about profile building and modification, refer to Chapter 2, Profile Components and Syntax (Novell AppArmor Administration Guide),
Chapter 3, Building and Managing Profiles with YaST (Novell AppArmor Administration Guide), and Chapter 4, Building Profiles from the Command Line
(Novell AppArmor Administration Guide).
2 Select the type of report to examine or configure from Executive Security Summary, Applications Audit, and Security Incident Report.
3 Edit the report generation frequency, e-mail address, export format, and location
of the reports by selecting Edit and providing the requested data.
4 To run a report of the selected type, click Run Now.
5 Browse through the archived reports of a given type by selecting View Archive
and specifying the report type.
or
Delete unneeded reports or add new ones.
TIP: For More Information
For more information about configuring event notification in Novell AppArmor,
refer to Section Configuring Security Event Notification (Chapter 6, Managing
Profiled Applications, Novell AppArmor Administration Guide). Find more information about report configuration in Section Configuring Reports (Chapter 6, Managing Profiled Applications, Novell AppArmor Administration Guide).
4 Leave YaST after you answer all questions. Your changes are applied to the respective profiles.
TIP: For More Information
For more information about updating your profiles from the system logs, refer
to Section Updating Profiles from Log Entries (Chapter 3, Building and Managing Profiles with YaST, Novell AppArmor Administration Guide).
42 Security and Confidentiality
One of the main characteristics of a Linux or UNIX system is its ability to handle several users at the same time (multiuser) and to allow these users to perform several tasks
(multitasking) on the same computer simultaneously. Moreover, the operating system
is network transparent. The users often do not know whether the data and applications
they are using are provided locally from their machine or made available over the network.
With the multiuser capability, the data of different users must be stored separately. Security and privacy need to be guaranteed. Data security was already an important issue,
even before computers could be linked through networks. Just like today, the most important concern was the ability to keep data available in spite of a lost or otherwise
damaged data medium, a hard disk in most cases.
This section is primarily focused on confidentiality issues and on ways to protect the
privacy of users, but it cannot be stressed enough that a comprehensive security concept
should always include procedures to have a regularly updated, workable, and tested
backup in place. Without this, you could have a very hard time getting your data back, not only in the case of a hardware defect, but also if you suspect that someone has gained unauthorized access and tampered with files.
Serial terminals connected to serial ports are still used in many places. Unlike network
interfaces, they do not rely on a network protocol to communicate with the host. A
simple cable or an infrared port is used to send plain characters back and forth between
the devices. The cable itself is the weakest point of such a system: with an older printer
connected to it, it is easy to record anything that runs over the wires. What can be
achieved with a printer can also be accomplished in other ways, depending on the effort
that goes into the attack.
Reading a file locally on a host requires different access rules from opening a network connection with a server on a different host. There is a distinction between local security and network security. The line is drawn where data must be put into packets to be sent somewhere else.
42.1.2 Passwords
On a Linux system, passwords are not stored as plain text and the text string entered is
not simply matched with the saved pattern. If this were the case, all accounts on your
system would be compromised as soon as someone got access to the corresponding
file. Instead, the stored password is encrypted and, each time it is entered, is encrypted
again and the two encrypted strings are compared. This only provides more security if
the encrypted password cannot be reverse-computed into the original text string.
This is actually achieved by a special kind of algorithm, also called trapdoor algorithm,
because it only works in one direction. An attacker who has obtained the encrypted
string is not able to get your password by simply applying the same algorithm again.
Instead, it would be necessary to test all the possible character combinations until a
combination is found that looks like your password when encrypted. With passwords
eight characters long, there are quite a number of possible combinations to calculate.
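The one-way scheme can be tried out with the openssl command line tool (the MD5-based crypt variant selected here with -1 is only one of several algorithms; this illustrates the principle, not the exact algorithm your /etc/shadow uses):

```shell
# Hash a password with a fixed salt; the result is deterministic for that salt.
stored=$(openssl passwd -1 -salt ab tantalize)
echo "$stored"

# Verification re-encrypts the candidate with the same salt and compares the
# strings; the stored hash is never decrypted.
candidate=$(openssl passwd -1 -salt ab tantalize)
[ "$stored" = "$candidate" ] && echo "password accepted"

candidate=$(openssl passwd -1 -salt ab wrongpass)
[ "$stored" = "$candidate" ] || echo "password rejected"
```
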
In the seventies, it was argued that this method would be more secure than others due
to the relative slowness of the algorithm used, which took a few seconds to encrypt just
one password. In the meantime, however, PCs have become powerful enough to do
several hundred thousand or even millions of encryptions per second. Because of this,
encrypted passwords should not be visible to regular users (/etc/shadow cannot be
read by normal users). It is even more important that passwords are not easy to guess,
in case the password file becomes visible due to some error. Consequently, it is not really useful to translate a password like tantalize into t@nt@1lz3.
Replacing some letters of a word with similar looking numbers is not safe enough.
Password cracking programs that use dictionaries to guess words also play with substitutions like that. A better way is to make up a word with no common meaning, something
that only makes sense to you personally, like the first letters of the words of a sentence
or the title of a book, such as The Name of the Rose by Umberto Eco. This would
give the following safe password: TNotRbUE9. In contrast, passwords like beerbuddy or jasmine76 are easily guessed even by someone who has only some casual
knowledge about you.
The permissions of all files included in the openSUSE distribution are carefully chosen.
A system administrator who installs additional software or other files should take great
care when doing so, especially when setting the permission bits. Experienced and security-conscious system administrators always use the -l option with the command ls
to get an extensive file list, which allows them to detect any incorrect file permissions
immediately. An incorrect file attribute not only means that files could be changed or deleted. Modified files could be executed by root or, in the case of configuration files, programs could use such files with the permissions of root. This significantly increases the attack possibilities for an intruder. Attacks like this are called cuckoo eggs,
because the program (the egg) is executed (hatched) by a different user (bird), just like
a cuckoo tricks other birds into hatching its eggs.
An openSUSE system includes the files permissions, permissions.easy, permissions.secure, and permissions.paranoid, all in the directory /etc. The purpose of these files is to define special permissions, such as world-writable directories or, for files, the setuid bit (programs with the setuid bit set do not run with the permissions of the user that launched them, but with the permissions of the file owner, in most cases root). An administrator can use the file /etc/permissions.local to add custom settings.
To define which of the above files is used by openSUSE's configuration programs to
set permissions accordingly, select Local Security in the Security and Users section of
YaST. To learn more about the topic, read the comments in /etc/permissions or
consult the manual page of chmod (man chmod).
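The effect of stray permission bits can be demonstrated safely in a scratch directory (all paths below are temporary and illustrative):

```shell
# Create a scratch directory with one world-writable file -- a typical mistake.
dir=$(mktemp -d)
touch "$dir/app.conf"
chmod 666 "$dir/app.conf"

# find can report every world-writable regular file under a tree:
find "$dir" -type f -perm -0002

# Tighten the mode and verify that nothing is reported anymore:
chmod 644 "$dir/app.conf"
find "$dir" -type f -perm -0002

rm -r "$dir"
```

Running a similar find over /etc or /usr is a quick way to spot files whose permissions deviate from the chosen permissions profile.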
is written beyond the end of that buffer area, which, under certain circumstances, makes
it possible for a program to execute program sequences influenced by the user (and not
by the programmer), rather than just processing user data. A bug of this kind may have
serious consequences, especially if the program is being executed with special privileges
(see Section 42.1.4, File Permissions (page 664)).
Format string bugs work in a slightly different way, but again it is the user input that could lead the program astray. In most cases, these programming errors are exploited in programs executed with special permissions (setuid and setgid programs), which also means that you can protect your data and your system from such bugs by removing the corresponding execution privileges from programs. Again, the best way is to apply a policy of using the lowest possible privileges (see Section 42.1.4, File Permissions (page 664)).
Given that buffer overflows and format string bugs are bugs related to the handling of
user data, they are not only exploitable if access has been given to a local account.
Many of the bugs that have been reported can also be exploited over a network link.
Accordingly, buffer overflows and format string bugs should be classified as being
relevant for both local and network security.
42.1.6 Viruses
Contrary to what some people say, there are viruses that run on Linux. However, the
viruses that are known were released by their authors as proofs of concept to show that the technique works as intended. None of these viruses have been spotted in the wild so far.
Viruses cannot survive and spread without a host on which to live. In this case, the host
would be a program or an important storage area of the system, such as the master boot
record, which needs to be writable for the program code of the virus. Owing to its
multiuser capability, Linux can restrict write access to certain files, especially important
with system files. Therefore, if you did your normal work with root permissions, you
would increase the chance of the system being infected by a virus. In contrast, if you
follow the principle of using the lowest possible privileges as mentioned above, chances
of getting a virus are slim.
Apart from that, you should never rush into executing a program from some Internet
site that you do not really know. openSUSE's RPM packages carry a cryptographic
signature as a digital label that the necessary care was taken to build them. Viruses are
a typical sign that the administrator or the user lacks the required security awareness,
putting at risk even a system that should be highly secure by its very design.
Viruses should not be confused with worms, which belong to the world of networks
entirely. Worms do not need a host to spread.
In the case of cookie-based access control, a character string is generated that is only
known to the X server and to the legitimate user, just like an ID card of some kind. This
cookie (the word goes back not to ordinary cookies, but to Chinese fortune cookies,
which contain an epigram) is stored on login in the file .Xauthority in the user's
home directory and is available to any X client wanting to use the X server to display
a window. The file .Xauthority can be examined by the user with the tool xauth.
If you were to rename .Xauthority or if you deleted the file from your home directory by accident, you would not be able to open any new windows or X clients.
SSH (secure shell) can be used to encrypt a network connection completely and forward
it to an X server transparently without the encryption mechanism being perceived by
the user. This is also called X forwarding. X forwarding is achieved by simulating an
X server on the server side and setting a DISPLAY variable for the shell on the remote
host. Further details about SSH can be found in Chapter 36, SSHSecure Network
Operations (page 589).
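In practice, X forwarding is requested with the -X option of ssh (the host name here is a placeholder):

```
ssh -X jupiter.example.com
# on the remote host, DISPLAY now points at the simulated X server:
echo $DISPLAY        # typically something like localhost:10.0
xterm &              # the window appears on the local display, tunneled via SSH
```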
WARNING
If you do not consider the host where you log in to be a secure host, do not
use X forwarding. With X forwarding enabled, an attacker could authenticate
via your SSH connection to intrude on your X server and sniff your keyboard
input, for instance.
Over the years, experience has shown that the availability of exploit codes has contributed to more secure operating systems, because operating system makers were forced to fix the problems in their software. With free software, anyone has access to the source code (openSUSE comes with all available source code) and anyone who finds a vulnerability and its exploit code can submit a patch to fix the corresponding bug.
Spoofing is an attack where packets are modified to contain counterfeit source data, usually the IP address. Most active forms of attack rely on sending out such fake packets, something that, on a Linux machine, can only be done by the superuser (root).
Many of the attacks mentioned are carried out in combination with a DoS. If an attacker sees an opportunity to bring down a certain host abruptly, even if only for a short time, it is easier to push the active attack, because the host cannot interfere with it for some time.
42.1.13 Worms
Worms are often confused with viruses, but there is a clear difference between the two.
Unlike viruses, worms do not need to infect a host program to live. Instead, they are
specialized to spread as quickly as possible on network structures. The worms that appeared in the past, such as Ramen, Lion, or Adore, made use of well-known security holes in server programs like bind8 or lprNG. Protection against worms is relatively easy. Given that some time elapses between the discovery of a security hole and the moment the worm hits your server, there is a good chance that an updated version of the affected program is available in time. That is only useful if the administrator actually installs the security updates on the systems in question.
Check your backups of user and system files regularly. Consider that if you do not
test whether the backup works, it might actually be worthless.
Check your log files. Whenever possible, write a small script to search for suspicious
entries. Admittedly, this is not exactly a trivial task. In the end, only you can know
which entries are unusual and which are not.
Use tcp_wrapper to restrict access to the individual services running on your
machine, so you have explicit control over which IP addresses can connect to a
service. For further information regarding tcp_wrapper, consult the manual
pages of tcpd and hosts_access (man 8 tcpd, man hosts_access).
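A minimal tcp_wrapper setup might look like this (the subnet address is an example; the syntax is that of hosts_access):

```
# /etc/hosts.allow -- permit SSH only from the local subnet
sshd : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL : ALL
```

Because hosts.allow is consulted first, this pair of files implements a default-deny policy with explicit exceptions.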
Use SuSEfirewall to enhance the security provided by tcpd (tcp_wrapper).
Design your security measures to be redundant: a message seen twice is much
better than no message at all.
An Example Network
This example network is used across all network-related chapters of the openSUSE
documentation.
GNU Licenses
This appendix contains the GNU General Public License and the GNU Free Documentation License.
GNU General Public License
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended
to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom
to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change
the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions
translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must
make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute
and/or modify the software.
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0.
This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program
is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been
made by running the Program). Whether that is true depends on what the Program does.
1.
You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such
modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
a)
You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to
be licensed as a whole at no charge to all third parties under the terms of this License.
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the
most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying
that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License.
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to
print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably
considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as
separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must
be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who
wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to
control the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a
storage or distribution medium does not bring the other work under the scope of this License.
3.
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections
1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2
above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing
source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above
on a medium customarily used for software interchange; or,
c)
Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for
noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b
above.)
The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means
all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation
of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source
or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the
source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with
the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy,
modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received
copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
678
Reference
5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the
Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the
Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor
to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise
of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.
7.
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are
imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions
of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as
a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program
by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the
section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this
section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many
people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that
choice.
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright
holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution
is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later
version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.
10.
If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask
for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions
for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT
PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER
PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE,
YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
12.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR
ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE
OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
GNU Licenses
679
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use
may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a copyright disclaimer for the program, if necessary.
Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public
License [http://www.fsf.org/licenses/lgpl.html] instead of this License.
PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone
the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves
for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements
the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should
come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any
textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose
is instruction or reference.
VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices,
and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to
those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute.
However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions
in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's
license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on
the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The
front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim
copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover,
and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy
along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access
to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option,
you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus
accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of
that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a
chance to provide you with an updated version of the Document.
MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified
Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the
Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if
there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version
gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together
with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
D.
E.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this
License, in the form shown in the Addendum below.
G.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the
Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and
publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network
locations given in the Document for previous versions it was based on. These may be placed in the History section. You may omit a network location
for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance
and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered
part of the section titles.
M.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
O.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the
Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the
Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example,
statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of
Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements
made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the
same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher
that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement
of any Modified Version.
COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions,
provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant
Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there
are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in
parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles
in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise
combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License
in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying
of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License
into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant
Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections
in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document,
and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and
disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will
prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1)
will typically require changing the actual title.
TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy,
modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received
copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software
license, such as the GNU General Public License, to permit their use in free software.
Index
Symbols
64-bit Linux, 175
kernel specifications, 178
runtime support, 175
software development, 176
A
access permissions (see permissions)
ACLs, 251-262
access, 253, 256
check algorithm, 261
default, 254, 259
definitions, 253
effects, 259
handling, 254
masks, 258
permission bits, 255
structure, 254
support, 262
add-on medium
language support, 84
Apache, 481-520
CGI scripts, 507
configuring, 483
files, 483
manually, 483-490
virtual host, 486
YaST, 490-497
installing, 482
modules, 499-506
available, 500
building, 506
external, 504
installing, 500
multiprocessing, 503
quick start, 481
security, 515
SSL, 509-515
configure Apache with SSL, 514
creating an SSL certificate, 510
starting, 497
stopping, 497
troubleshooting, 517
authentication
PAM, 263-270
B
Bash
.bashrc, 216
.profile, 216
profile, 215
BIND, 361-372
Bluetooth, 553
hciconfig, 557
hcitool, 556
pand, 558
sdptool, 557
booting, 179
boot sectors, 195, 196
configuring
YaST, 205
graphic, 210
GRUB, 195, 196
initramfs, 181
initrd, 181
C
cards
graphics, 122
network, 315, 316
cat, 286
cd, 282
chgrp, 280, 283
chmod, 279, 283
chown, 280, 282
CJK, 223
clear, 291
commands, 280-291
cat, 286
cd, 282
chgrp, 280, 283
chmod, 279, 283
chown, 280, 282
clear, 291
cp, 281
date, 289
df, 288
diff, 287
du, 288
file, 286
find, 285
fonts-config, 124
free, 220, 288
getfacl, 257
grep, 286
grub, 196
gzip, 284
halt, 291
ifconfig, 343
ip, 340
kadmin, 627
kill, 289
killall, 289
kinit, 634
ktadd, 637
ldapadd, 436
ldapdelete, 439
ldapmodify, 437
ldapsearch, 438, 640
less, 286
ln, 282
locate, 285
lp, 104
ls, 281
man, 280
mkdir, 282
mount, 287
mv, 282
nslookup, 290
passwd, 291
ping, 290, 341
ps, 289
reboot, 291
rm, 282
rmdir, 282
route, 344
rpm, 85
rpmbuild, 85
scp, 590
setfacl, 257
sftp, 591
slptool, 350
smbpasswd, 479
ssh, 590
ssh-agent, 593
ssh-keygen, 592
su, 291
tar, 284
telnet, 290
top, 289
umount, 287
updatedb, 285
configuration files, 333
.bashrc, 216, 219
.emacs, 221
.profile, 216
.xsession, 593
acpi, 529
crontab, 216
csh.cshrc, 225
dhclient.conf, 387
dhcp, 334
dhcpd.conf, 388
fstab, 46, 287
group, 76
grub.conf, 203
host.conf, 336
HOSTNAME, 340
hosts, 315, 336
ifcfg-*, 333
inittab, 183, 185, 222
inputrc, 222
irda, 561
kernel, 181
krb5.conf, 631, 632, 634, 638
krb5.keytab, 637
language, 223, 225
logrotate.conf, 217
menu.lst, 198
named.conf, 362, 363-372
network, 334
networks, 336
nscd.conf, 339
nsswitch.conf, 337
openldap, 640
passwd, 76
permissions, 672
powersave, 529
profile, 215, 219, 225
resolv.conf, 220, 335, 362
routes, 334
samba, 473
services, 473
slapd.conf, 429, 641
smb.conf, 473, 479
smpppd.conf, 346
smpppd-c.conf, 347
sshd_config, 594, 638
ssh_config, 639
suseconfig, 194
sysconfig, 192-194
termcap, 222
wireless, 334
xorg.conf, 117
Device, 121
Monitor, 122
Screen, 120
configuring, 192
cable modem, 329
DNS, 353
DSL, 329
FTP server, 521
GRUB, 196, 203
IPv6, 313
IrDA, 561
ISDN, 326
modems, 324
networks, 316
manually, 332-345
routing, 334
Samba, 471-477
clients, 477
SSH, 589
T-DSL, 331
consoles
assigning, 222
graphical, 210
switching, 222
core files, 219
cp, 281
cpuspeed, 537
cron, 216
D
date, 289
deltarpm, 89
df, 288
DHCP, 375-391
configuring with YaST, 376
dhcpd, 388-389
packages, 387
server, 388-389
static address assignment, 389
diff, 287
directories
/, 273
/bin, 273, 274
/boot, 273, 274
/dev, 273, 274
/etc, 273, 274
/home, 273, 274
/lib, 273, 275
/media, 273, 275
/mnt, 273, 275
/opt, 273, 275
/root, 273, 275
/sbin, 273, 275
/srv, 273, 275
/tmp, 274, 275
/usr, 274, 276
/var, 274, 276
/windows, 274, 277
changing, 282
creating, 282
deleting, 282
structure, 273
disks
boot, 209
DNS, 314
BIND, 361-372
configuring, 353
domains, 335
forwarding, 362
logging, 366
mail exchanger, 315
name servers, 335
NIC, 315
options, 364
reverse lookup, 371
security and, 670
starting, 362
terminology, 353
top level domain, 314
troubleshooting, 362
zones
files, 367
domain name system (see DNS)
DOS
sharing files, 469
drives
mounting, 287
unmounting, 287
du, 288
E
editors
Emacs, 221-222
vi, 291
Emacs, 221-222
.emacs, 221
default.el, 221
encoding
ISO-8859-1, 225
encrypting, 643-647
creating partitions, 645
files, 646-649
files with vi, 649
partitions, 644-646
removable media, 647
YaST, with, 644
error messages
bad interpreter, 46
permission denied, 46
F
file, 286
file systems, 241-250
ACLs, 251-262
changing, 44
cryptofs, 643
encrypting, 643
Ext2, 243-244
Ext3, 244-245
LFS, 248
limitations, 248
ReiserFS, 242-243
selecting, 242
supported, 247-248
terms, 241
XFS, 246
files
archiving, 284
comparing, 287
compressing, 284
copying, 281
deleting, 282
encrypting, 646
finding, 218
moving, 282
searching contents, 286
searching for, 285
viewing, 286
find, 285
Firefox
URL open command, 81
firewalls, 577
packet filters, 577, 581
SuSEfirewall2, 577, 582
fonts, 124
TrueType, 123
X11 core, 124
Xft, 125
free, 288
FTP server
configuring, 521
G
GNOME
shell, 272
graphics
cards
drivers, 122
grep, 286
GRUB, 195
boot menu, 198
boot password, 204
boot sectors, 196
booting, 196
commands, 196
device names, 199
device.map, 197, 202
GRUB Geom Error, 211
grub.conf, 197, 203
limitations, 196
Master Boot Record (MBR), 195
menu editor, 201
menu.lst, 197, 198
partition names, 199
troubleshooting, 211
uninstalling, 209
gzip, 284
H
halt, 291
hardware
ISDN, 326
hciconfig, 557
hcitool, 556
help
info pages, 220
man pages, 220, 280
Novell/SUSE manuals, xiv
X, 123
I
I18N, 223
info pages, 220
init, 183
adding scripts, 188
inittab, 183
scripts, 186-190
installing
GRUB, 196
packages, 86
internationalization, 223
Internet
cinternet, 347
dial-up, 345-347
DSL, 329
ISDN, 326
KInternet, 347
qinternet, 347
smpppd, 345-347
TDSL, 331
IP addresses, 302
classes, 303
dynamic assignment, 375
IPv6, 305
configuring, 313
masquerading, 580
private, 305
IrDA, 560-563
configuring, 561
starting, 561
stopping, 561
troubleshooting, 562
K
KDE
shell, 272
Kerberos, 613-619
administering, 621-642
authenticators, 614
clients
configuring, 631-633
clock skew, 633
clock synchronization, 625
configuring
clients, 631-633
credentials, 614
installing, 621-642
KDC, 624-625, 626-628
administering, 634
nsswitch.conf, 625
starting, 628
keytab, 637
LDAP and, 639-642
master key, 627
PAM support, 637-638
principals, 614
creating, 628
host, 636
service, 636
realms, 623
creating, 627
session key, 614
SSH configuration, 638
stash file, 627
ticket-granting service, 617
tickets, 614, 617
kernel
standard kernel, 84
kernels
caches, 220
limits, 249
keyboard
Asian characters, 223
layout, 222
mapping, 222
compose, 222
multikey, 222
X Keyboard Extension, 223
XKB, 223
kill, 289
killall, 289
L
L10N, 223
laptops
IrDA, 560-563
power management, 527-537
LDAP, 409-440
access control, 433
ACLs, 431
adding data, 435
administering groups, 426
administering users, 426
configuring
YaST, 414
deleting data, 439
directory tree, 411
Kerberos and, 639-642
ldapadd, 435
ldapdelete, 439
ldapmodify, 437
ldapsearch, 438
modifying data, 437
searching data, 438
server configuration
manual, 429
YaST, 414
YaST
client, 419
modules, 419
templates, 419
less, 286
LFS, 248
Lightweight Directory Access Protocol
(see LDAP)
Linux
networks and, 299
sharing files with another OS, 469
uninstalling, 209
ln, 282
localization, 223
locate, 218, 285
log files, 217
boot.msg, 529
messages, 362, 586
M
man pages, 220, 280
masquerading, 580
configuring with SuSEfirewall2, 582
Master Boot Record (see MBR)
MBR, 195, 196
memory
RAM, 220
mkdir, 282
modems
cable, 329
YaST, 324
mount, 287
mv, 282
N
name servers (see DNS)
NAT (see masquerading)
NetBIOS, 469
Network File System (see NFS)
Network Information Service (see NIS)
networks, 299
authentication
Kerberos, 613-619
base network address, 304
broadcast address, 304
configuration files, 333-340
configuring, 315-331, 332-345
IPv6, 313
DHCP, 375
DNS, 314
localhost, 305
netmasks, 303
routing, 302, 303
SLP, 349
TCP/IP, 299
YaST, 316
alias, 319
gateway, 320
hostname, 319
IP address, 318
starting, 321
NFS, 455
clients, 456
exporting, 463
importing, 457
mounting, 457
servers, 459
NIS, 401-408
clients, 407
masters, 401-407
slaves, 401-407
Novell/SUSE manuals, xiv
nslookup, 290
NSS, 337
databases, 338
O
OpenLDAP (see LDAP)
OpenSSH (see SSH)
OS/2
sharing files, 469
P
packages
compiling, 93
compiling with build, 95
installing, 86
LSB, 85
package manager, 85
RPMs, 85
uninstalling, 86
verifying, 86
packet filters (see firewalls)
PAM, 263-270
pand, 558
partitions
creating, 41, 43
encrypting, 645
fstab, 46
LVM, 44
parameters, 44
partition table, 195
RAID, 44
reformatting, 44
swap, 44
types, 42
passwd, 291
passwords
changing, 291
PCMCIA
IrDA, 560-563
permissions, 277
ACLs, 251-262
changing, 279, 283
directories, 278
file permissions, 218
file systems, 277
files, 277
viewing, 279
ping, 290, 341
Pluggable Authentication Modules (see
PAM)
ports
53, 364
PostgreSQL
updating, 76
power management, 527-542
ACPI, 527, 528-535
battery monitor, 528
cpufrequency, 537
cpuspeed, 537
hibernation, 528
powersave, 537
standby, 527
suspend, 527
powersave, 537
configuring, 537
printing, 97
command line, 104
CUPS, 103
GDI printers, 108
IrDA, 561
kprinter, 103
network, 110
Samba, 470
troubleshooting
network, 110
xpp, 103
private branch exchange, 328
processes, 289
killing, 289
overview, 289
protocols
CIFS, 469
IPv6, 305
LDAP, 409
SLP, 349
SMB, 469
ps, 289
R
RAID
YaST, 55
reboot, 291
RFCs, 299
rm, 282
rmdir, 282
routing, 302, 334
masquerading, 580
netmasks, 303
routes, 334
static, 334
RPM, 85-96
database
rebuilding, 87, 93
deltarpm, 89
dependencies, 86
patches, 87
queries, 90
rpmnew, 86
rpmorig, 86
rpmsave, 86
security, 672
SRPMS, 93
tools, 96
uninstalling, 87
updating, 86
verify, 92
verifying, 86
rpmbuild, 85
runlevels, 183-186
changing, 185-186
editing in YaST, 190
S
Samba, 469-480
CIFS, 469
clients, 470, 477-478
configuring, 471-477
installing, 471
login, 478
names, 469
permissions, 477
printers, 470
printing, 478
security, 477
server, 470
servers, 471-477
shares, 470, 475
SMB, 469
starting, 471
stopping, 471
swat, 473
TCP/IP and, 469
screen
resolution, 121
scripts
init.d, 183, 186-190, 344
boot, 187
boot.local, 188
boot.setup, 188
halt, 188
network, 344
nfsserver, 345
portmap, 345
postfix, 345
rc, 185, 186, 188
xinetd, 345
ypbind, 345
ypserv, 345
irda, 561
mkinitrd, 181
modify_resolvconf, 220, 335
SuSEconfig, 192-194
disabling, 194
sdptool, 557
security, 661-673
attacks, 669-670
booting, 662, 664
bugs and, 665, 668
DNS, 670
engineering, 662
firewalls, 577
local, 663-667
network, 667-670
passwords, 663-664
permissions, 664-665
system
limiting resource use, 219
localizing, 223
rebooting, 291
shutdown, 291
updating, 75-78
T
Tablet PCs, 565-573
configuring, 567
Dasher, 570
installing, 566
Jarnal, 569
KRandRTray, 568
Xournal, 569
xstroke, 568
xvkbd, 567
tar, 284
TCP/IP, 299
ICMP, 300
IGMP, 300
layer model, 300
packets, 301, 302
TCP, 300
UDP, 300
TEI XSL stylesheets
new location, 80
telnet, 290
top, 289
U
ulimit, 219
options, 219
umount, 287
uninstalling
GRUB, 209
Linux, 209
updatedb, 285
updating, 75-78
online, 63
command line, 66
passwd and group, 76
problems, 76
YaST, 76
users
/etc/passwd, 266
V
variables
environment, 223
virtual memory, 44
W
whois, 315
wild cards, 285
Windows
sharing files, 469
wireless connections
Bluetooth, 553
X
X
character sets, 123
configuring, 117-123
drivers, 122
font systems, 124
fonts, 123
help, 123
SaX2, 118
security, 667
SSH and, 593
TrueType fonts, 123
virtual screen, 121
X11 core fonts, 124
xft, 123
Xft, 125
xorg.conf, 118
Y
YaST
boot configuration, 205
default system, 208
security, 208
time-out, 208
boot loader
location, 207
password, 208
type, 206
CA management, 602
cable modem, 329
command line, 72
DHCP, 376
DSL, 329
GRUB, 206
ISDN, 326
LDAP
clients, 419
servers, 414
LILO, 206
LVM, 49
modems, 324
ncurses, 69
network card, 316
NIS clients, 407
online update, 63-65
partitioning, 41
RAID, 55
runlevels, 190
Samba
clients, 477
sysconfig editor, 192
T-DSL, 331
text mode, 69-73
modules, 72
updating, 76
X.509 certification, 597
certificates, 605
changing default values, 607
creating CRLs, 609
exporting CA objects as a file, 611
exporting CA objects to LDAP, 609
importing general server certificates,
611
root CA, 602
sub-CA, 604
YP (see NIS)
Z
zypper, 66-68