The C Compiler:
In UNIX and Linux operating systems, the compiler used to compile a C file is known as "gcc", where "gcc" stands for GNU C Compiler.
Compiling a program with gcc:
You can use your favorite editor to write a C program. Suppose you have written a simple program and saved it as test1.c. Here is the code:
#include <stdio.h>
int main(void)
{
    printf("hello world\n");
    return 0;
}
To compile the program, we type:
$ gcc test1.c
When "gcc" runs with the above usage, it generates a file named "a.out" that contains your program. You can run the program as follows:
$ ./a.out
hello world
In this way we can carry out the compilation and execution of a C program.
The C compiler "gcc" options:
The GNU C compiler provides many options that help you manage your project. Some of them are given below:
The "-o" option:
By default, the C compiler generates the same output file, "a.out", for every C file it compiles. To overcome this, we use the "-o" option of "gcc", which lets you choose the name of the executable. To use this option, type the following command:
$ gcc -o test1 test1.c
Now you get a file named "test1" instead of "a.out". Hence, run the program like this:
$ ./test1
hello world
The warning options "-w", "-Wall" & "-Werror":
When we write C programs, the compiler sometimes generates warnings, and "gcc" provides options to manage them in UNIX and Linux.
Warnings are controlled by the "-w" and "-Wall" options: -w suppresses all warnings, while -Wall enables all of the most common warnings. Suppose we write a simple program:
#include <stdio.h>
int main()
{
    printf("Hello world");
}
Now compile the program using the -o and -Wall options:
$ gcc -Wall -o test1 test1.c
test1.c: warning: control reaches end of non-void function
This warning is not generated if you do not use the "-Wall" option. But -Wall treats a warning as a warning, not as an error: "gcc" still compiles your program and aborts only on actual errors.
To stop the compilation when a warning is detected, "gcc" provides a useful option, "-Werror". It forces "gcc" to treat all warnings as errors and to stop compilation if any warning is generated. It can be used as:
$ gcc -Wall -Werror -o test1 test1.c
This option is particularly useful in automated builds: when "-Werror" is used, "gcc" will not finish the compilation if any warning is detected.
Optimization with gcc:
One of the most important features of a modern C compiler is the optimizer. The optimizer is the part of the compiler that examines your code, identifies areas that are suboptimal, and rewrites them with code that does the same thing in less space or with better performance. "gcc" has a powerful and highly configurable optimizer that can be applied to your programs.
Optimization options "-O or -O1", "-O2" and "-O3":
"gcc" enables optimizations when you use one of the -O options. You can specify several different levels of optimization. If you simply use -O, it is taken as level one; -O is the same as -O1. In general, you can go up to level three (-O3). A command containing an optimization option looks like this:
$ gcc -Wall -O1 -o test1 test1.c
We can also use the -O2 and -O3 options in the above command. These options control how aggressive gcc's optimizer is: the higher the optimization level, the more aggressive the optimizer. More aggressive optimization usually means that your code runs faster.
Optimization pitfalls:
As we know, optimization can give tremendous gains for our programs, but the optimization levels also have some pitfalls or limitations.
First, the more aggressive gcc becomes with optimizations, the longer it takes to compile your program. Therefore, some programmers compile without optimization during development.
Second, some options, especially -O3, can increase the size of the generated program, which can hurt performance more than the more efficient code helps.
Finally, debugging can be difficult when optimization is enabled, because the optimizer can eliminate code that has no effect on the final program, or rearrange statements for better performance. Therefore, some programmers avoid optimization as much as possible while debugging.
Many people prefer to compile their programs with -O2. This option often provides the best compromise between optimization strength, compile time and code size.
Debugging options:
"gcc" provides some debugging options that we can use when compiling a C program. When we compile with the debugging options, "gcc" inserts extra information into the object files (.o) and executables that it generates. This extra information enables "gdb" (the GNU Debugger) to determine the relationship between the compiled code and the lines in your source file. Without that information, gdb could not determine which line of code your program is executing at any given time.
NOTE: These debug symbols are not compiled into your programs by default because of one important side effect: they increase the size of the executable.
To generate the debugging symbols, we use the "-g" option of "gcc". It can be used as:
$ gcc -g -Wall -o test1 test1.c
You can also request more detailed debugging information, like:
$ gcc -ggdb3 -Wall -o test1 test1.c
This command instructs "gcc" to generate the debugging symbols with the "gdb" extensions. The 3 means level-3 debugging information, the highest level possible.
Advanced gcc options:
So far we have dealt with options that control basic file generation. "gcc" provides many other options that let you fine-tune its operation. For instance, there are options for specifying the search paths for include files and libraries, for speeding up compilation with pipes, for linking with libraries, and for viewing the normally hidden interactions between the build stages. Let us discuss each of them.
(i) Specifying search paths:
When building a project, "gcc" has a default search path that it uses for things like include files and libraries. Sometimes, however, we need a file or library that is not on that default search path. For this, "gcc" provides the -I and -L options (uppercase I and L): -I adds a directory to the search path for include files, and -L adds a directory to the search path for libraries.
For example, suppose your program "Punit.c" includes a file named scsi.h that is stored at /usr/include/scsi, which is not on the default search path of "gcc". Then you might type:
$ gcc -Wall -I/usr/include/scsi -o Punit Punit.c
The -I option and the specified path help the preprocessor find the scsi.h file. The same concept applies to libraries. If your program needs to link against the X11 library, you can tell the linker like this:
$ gcc -L/usr/X11R6/lib -Wall -o Punit Punit.c -lX11
Note: here we specify the path of the library using the -L option and link the library using the -l option.
(ii) Linking with libraries:
When writing many programs, we need to link with libraries. These libraries can be anything from ones that implement mathematical functions to ones that provide support for a graphical interface under the X Window System. They can be either static or shared (dynamic); "gcc" can work with both.
The basic option to link a library into your program is -l (a lowercase L). This option should be specified at the final link stage of your compile process, which brings together all the .o (object) files.
For instance, suppose your program uses the functions declared in the math header (math.h). When compiling, you then need to link in the math library, named simply m. Therefore, a command such as the following would be appropriate:
$ gcc -Wall -o Punit Punit.c -lm
(iii) Speeding up compilation with pipes:
When we build a program, the build process consists of many steps: preprocessing, compilation, assembly and linking, to name a few. Normally, gcc handles these steps itself.
By default, however, the stages communicate through temporary files. For instance, "gcc" will create a temporary file holding the output of the preprocessor, another with the output of the compiler, and perhaps one with the output of the assembler. All this reading and writing takes time.
A more efficient way for the stages to communicate is through pipes. With pipes, several programs can run at once, with the output of one being the input of the next. To speed up compilation this way, "gcc" provides the "-pipe" option. It can be used as:
$ gcc -pipe -Wall -O3 -o Punit Punit.c
(iv) Viewing the hidden interactions: the "-v" option:
All the interactions between the various build programs are normally hidden from view, because their details are generally unimportant to the user. However, you can ask "gcc" to report all these interactions between the build stages by using the -v option:
$ gcc -v -Wall -O3 -o test2 test2.c
When you run this command, "gcc" displays many details about its build process, including its version number and where it retrieved its build configuration.
NOTE: UNIX and Linux systems provide the g++ compiler for C++ programs. g++ is the GNU C++ compiler and performs the same function for C++ programs as gcc does for C programs.
Electronic Mail
By using electronic mail, you can send messages, letters, bio-data or any other type of document to other users on the same system. We know that with the write command the recipient must be logged in and must not have refused message permission. With mail, it is not necessary that the recipient be logged in: messages sent by mail are collected in a 'mailbox' somewhere in the system.
The mail stays in the mailbox until the user issues the necessary command to read it. There are two possibilities: either the user is logged in or not. If the user is logged in, the system informs the recipient of waiting mail at two different times:
First, when mail arrives while the recipient is busy using UNIX commands. After finishing his current activity, he immediately gets the message
You have mail.
If the recipient does not read the mail at this time, he will get the message
You have mail.
again and again at login until he reads his mail.
If the user is not logged in, his mail is collected in the system mailbox. Each user on a system is assigned a system mailbox. A user's system mailbox is a file located in the
/usr/spool/mail
directory. Suppose the user name is user1, who is going to receive your mail messages. The system mailbox of user1 would be
/usr/spool/mail/user1
All mail received from different resources is appended to this mailbox file. The system mailbox contains mail that a user has received but has not yet read, deleted or saved to a file.
Additionally, each user is assigned a personal mailbox. All the messages which have been read but not deleted or saved are placed in the user's personal mailbox. The personal mailbox of the user user1 is
/usr/user1/mbox
The UNIX system provides the mail command that handles the sending and receiving of electronic mail. The general format of the mail command is
$ mail username
where username is the user-id of the recipient. Suppose you, user3, want to send a mail to user1; then use the mail command as:
$ mail user1
We all know that Education begins with life.
Each time we read,
A seed is sown for the future.
Ctrl-d$
Like the write command, Ctrl-d is used to indicate that the message is complete. The mail program now takes your message and passes it to the specified user. Obviously, if user1 is logged in, he immediately gets the message
You have mail.
after completing his current activity. Similarly, you can send the same mail to more than one user by typing
$ mail user1 user5 user7
We all know that Education begins with life.
Each time we read,
A seed is sown for the future.
Ctrl-d$
If you want to mail a document written by you, say report.txt, then enter the following command at the prompt:
$ mail user1 user5 user7 < report.txt
In this way the document report.txt is sent to three users: user1, user5 and user7.
You can read your mail, if any, by simply typing mail at the prompt:
$ mail
If you have no mail, the message "No mail" is displayed. Otherwise, a list of the messages in your current mailbox is displayed, followed by the mailbox prompt (&) to indicate that mail is waiting for input. You can issue several commands at this prompt. Type a ? to view these mailbox commands.
UNIX provides several mailbox commands to customize your mail. Some of them are:
- n displays the nth message, where n stands for the message number
- d deletes the current message
- d n1 n2 n3 deletes the specified messages
- u undoes the last delete
- u n1 n2 n3 undeletes the specified messages
- l n1 n2 n3 prints the specified messages on the line printer
- s n1 n2 n3 file appends the specified messages to the file for later use
- r sends a reply to the sender of the current message
- r2 sends a reply to message number 2
- . displays the current message
- $ displays the last message
- + displays the next message, if it exists
- Enter displays the next message, if it exists
These commands will be effective until you quit the mailbox prompt (&) by typing q as:
& q
The next time you invoke the mail command, you would need to retype these commands. If you do not want to retype them each time, store them in the .mailrc file. The functional behavior of the .mailrc file is similar to that of the .exrc file for the vi editor program.
The .mailrc file is a hidden file which is read each time you invoke the mail command.
If you want to look at the contents of your personal mailbox, then invoke the mail command as:
$ mail -f
After this, your mailbox starts displaying the messages. If you have stored the mail in some other file, say myfile, then type
$ mail -f myfile
If you want to display the oldest message first, then enter the following
$ mail -r
By default the latest mail is viewed first.
Lastly, if someone tries to confuse you about the usage of the write command versus the mail command, do not worry. Use the write command to talk to a user directly if he is logged in and has not refused message permission; if the user is not logged in, use the mail command to send messages.
Till now we have seen the write and mail commands, which are invoked by ordinary users. Now we will see those commands which are used by the superuser or system administrator only. First we will look at the wall command.
The wall Command
The wall command can be invoked by the superuser or system administrator only. The superuser can write to any terminal on the network by using the wall command. The wall command does not care whether the terminals have refused write permission or not.
It is the superuser who can compel you to read superuser messages. The superuser uses the wall command as:
# /etc/wall
Here note that the command prompt is # instead of $, in order to distinguish the superuser from normal users. Also note that you cannot invoke the wall command as:
#wall
This is because the shell searches only the directories listed in its search path, and the wall file resides in the /etc directory; that is why it is necessary to give the complete path as:
# /etc/wall
Suppose the superuser wants to inform all the users that the system is going to shut down in 5 minutes, so they should save their work as soon as possible. To achieve this, the superuser uses the wall command as:
# /etc/wall
Hey, Wake Up.
The system is shutting down in 5 minutes.
Please save your work, if necessary.
Ctrl-d#
Like the write and mail commands, it is necessary to press the Ctrl and d keys simultaneously to indicate that the message is complete.
Although you can also invoke the wall command at your own terminal as:
$ /etc/wall
unfortunately it would broadcast the message to nobody except yourself. Try this command on your system and check the output.
The news Command
The news command is used to display the news items posted by the superuser or system administrator. The system administrator keeps different news items in different files in the /usr/news directory. You can view these files as:
$ news
If you have three unread files, say sports, education and services, in the news directory, then
$ news
would display the contents of these files one by one on the monitor. The most recent news is displayed first. You can also read a particular news file among several files. If you want to read the education file, type
$ news education
Once the news has been read by you, a file (.news_time, of 0 bytes) is created in your home directory. The creation time of .news_time is used to find out when you last accessed the news. Now when you invoke the news command the next time at the dollar prompt ($), the UNIX system compares the time at which each news item was posted with the creation time of the .news_time file.
If a news item was posted before the creation of the .news_time file, then it is not displayed; otherwise it is displayed. To overcome this limitation, UNIX provides the -a option:
$ news -a
Now it displays all news items, whether current or not. The other options are:
-n displays the names of the news items only, not their contents
-s reports the number of current news items
If you have read all the news items and try
$ news -s
then it promptly displays the message "No news".
The motd Command
motd stands for 'message of the day'. This command is invoked automatically on behalf of the superuser or system administrator when a user logs in. The motd program resides in the /etc directory.
Reading the news depends totally on the user's choice: if you do not want to read the news, there is no need to use the news command. The motd command is the opposite; it is executed automatically when a user logs in. As soon as you log in, the file /etc/profile gets executed automatically. This profile contains the command
cat /etc/motd
in it. The behavior of /etc/profile is similar to the Autoexec.bat file of DOS. The user also has his own .profile in his home directory in order to customize his working environment. When both these files exist, /etc/profile is executed first and then .profile.
This script is a snippet which can be used to send a text email from a Perl program.
It is useful for the Perl programmer who needs to automate an email step within a script.
Just copy the script below and modify the email body along with the sender and receiver information; you should be all set.
We use the sendmail command, which is present in /usr/sbin.
Script
#!/usr/bin/perl
my $EMAIL_BODY="MY EMAIL BODY";
my $EMAIL_TO = "xx";                       # placeholder recipient address
my $sendmail = "/usr/sbin/sendmail -t -v";
my $reply_to = "Reply-to: xx\n";           # placeholder address
my $from = "From: xxa\n";                  # placeholder address
my $SUBJECT_AUX = "Subject: Your Subject\n";
my $send_to = "To: " . $EMAIL_TO . "\n";
open(SENDMAIL, "|$sendmail") or die "Cannot open $sendmail: $!";
print SENDMAIL $send_to;
print SENDMAIL $from;
print SENDMAIL $reply_to;
print SENDMAIL $SUBJECT_AUX;
print SENDMAIL "Content-type: text/plain\n\n";
print SENDMAIL $EMAIL_BODY;
close(SENDMAIL);
The Evolution of Programming Languages
The first electronic computers appeared in the 1940's and were programmed in machine language by sequences of 0's and 1's that explicitly told the computer what operations to execute and in what order. The operations themselves were very low level: move data from one location to another, add the contents of two registers, compare two values, and so on. Needless to say, this kind of programming was slow, tedious, and error prone. And once written, the programs were hard to understand and modify.
The Move to Higher-level Languages
The first step towards more people-friendly programming languages was the development of mnemonic assembly languages in the early 1950's. Initially, the instructions in an assembly language were just mnemonic representations of machine instructions. Later, macro instructions were added to assembly languages so that a programmer could define parameterized shorthands for frequently used sequences of machine instructions. A major step towards higher-level languages was made in the latter half of the 1950's with the development of Fortran for scientific computation, Cobol for business data processing, and Lisp for symbolic computation.
The philosophy behind these languages was to create higher-level notations with which programmers could more easily write numerical computations, business applications, and symbolic programs. These languages were so successful that they are still in use today. In the following decades, many more languages were created with innovative features to help make programming easier, more natural, and more robust. Today, there are thousands of programming languages.
They can be classified in a variety of ways. One classification is by generation.
- First-generation languages are the machine languages.
- Second-generation languages are the assembly languages.
- Third-generation languages are the higher-level languages like Fortran, Cobol, Lisp, C, C++, C#, and Java.
- Fourth-generation languages are languages designed for specific applications, like NOMAD for report generation, SQL for database queries, and Postscript for text formatting.
- The term fifth-generation language has been applied to logic- and constraint-based languages like Prolog and OPS5.
Another classification of languages uses the term imperative for languages in which a program specifies how a computation is to be done and declarative for languages in which a program specifies what computation is to be done.
Languages such as C, C++, C#, and Java are imperative languages. In imperative languages there is a notion of program state and of statements that change the state. Functional languages such as ML and Haskell and constraint logic languages such as Prolog are often considered to be declarative languages. The term von Neumann language is applied to programming languages whose computational model is based on the von Neumann computer architecture. Many of today's languages, such as Fortran and C, are von Neumann languages.
An object-oriented language is one that supports object-oriented programming, a programming style in which a program consists of a collection of objects that interact with one another. Simula 67 and Smalltalk are the earliest major object-oriented languages. Languages such as C++, C#, Java, and Ruby are more recent object-oriented languages. Scripting languages are interpreted languages with high-level operators designed for "gluing together" computations. These computations were originally called "scripts." Awk, JavaScript, Perl, PHP, Python, Ruby, and Tcl are popular examples of scripting languages. Programs written in scripting languages are often much shorter than equivalent programs written in languages like C.
Impacts on Compilers
Since the design of programming languages and compilers are intimately related, the advances in programming languages placed new demands on compiler writers. They had to devise algorithms and representations to translate and support the new language features. Since the 1940's computer architecture has evolved as well. Not only did the compiler writers have to track new language features, they also had to devise translation algorithms that would take maximal advantage of the new hardware capabilities. Compilers can help promote the use of high-level languages by minimizing the execution overhead of the programs written in these languages.
Compilers are also critical in making high-performance computer architectures effective on users' applications. In fact, the performance of a computer system is so dependent on compiler technology that compilers are used as a tool in evaluating architectural concepts before a computer is built. Compiler writing is challenging. A compiler by itself is a large program. Moreover, many modern language-processing systems handle several source languages and target machines within the same framework; that is, they serve as collections of compilers, possibly consisting of millions of lines of code. Consequently, good software-engineering techniques are essential for creating and evolving modern language processors.
A compiler must translate correctly the potentially infinite set of programs that could be written in the source language. The problem of generating the optimal target code from a source program is undecidable in general; thus, compiler writers must evaluate tradeoffs about what problems to tackle and what heuristics to use to approach the problem of generating efficient code. A study of compilers is also a study of how theory meets practice.
The purpose of this text is to teach the methodology and fundamental ideas used in compiler design. It is not the intention of this text to teach all the algorithms and techniques that could be used for building a state-of-the-art language-processing system. However, readers of this text will acquire the basic knowledge and understanding to learn how to build a compiler relatively easily.
The Science of Building a Compiler
Compiler design is full of beautiful examples where complicated real-world problems are solved by abstracting the essence of the problem mathematically. These serve as excellent illustrations of how abstractions can be used to solve problems: take a problem, formulate a mathematical abstraction that captures the key characteristics, and solve it using mathematical techniques. The problem formulation must be grounded in a solid understanding of the characteristics of computer programs, and the solution must be validated and refined empirically.
A compiler must accept all source programs that conform to the specification of the language; the set of source programs is infinite and any program can be very large, consisting of possibly millions of lines of code. Any transformation performed by the compiler while translating a source program must preserve the meaning of the program being compiled. Compiler writers thus have influence over not just the compilers they create, but all the programs that their compilers compile. This leverage makes writing compilers particularly rewarding; however, it also makes compiler development challenging.
Modeling in Compiler Design and Implementation
The study of compilers is mainly a study of how we design the right mathematical models and choose the right algorithms, while balancing the need for generality and power against simplicity and efficiency. Some of the most fundamental models are finite-state machines and regular expressions. These models are useful for describing the lexical units of programs (keywords, identifiers, and such) and for describing the algorithms used by the compiler to recognize those units.
Also among the most fundamental models are context-free grammars, used to describe the syntactic structure of programming languages such as the nesting of parentheses or control constructs.
The Science of Code Optimization
The term "optimization" in compiler design refers to the attempts that a compiler makes to produce code that is more efficient than the obvious code. "Optimization" is thus a misnomer, since there is no way that the code produced by a compiler can be guaranteed to be as fast or faster than any other code that performs the same task. In modern times, the optimization of code that a compiler performs has become both more important and more complex. It is more complex because processor architectures have become more complex, yielding more opportunities to improve the way code executes. It is more important because massively parallel computers require substantial optimization, or their performance suffers by orders of magnitude.
With the likely prevalence of multicore machines (computers with chips that have large numbers of processors on them), all compilers will have to face the problem of taking advantage of multiprocessor machines. It is hard, if not impossible, to build a robust compiler out of "hacks." Thus, an extensive and useful theory has been built up around the problem of optimizing code. The use of a rigorous mathematical foundation allows us to show that an optimization is correct and that it produces the desirable effect for all possible inputs.
On the other hand, pure theory alone is insufficient. Like many real-world problems, there are no perfect answers. In fact, most of the questions that we ask in compiler optimization are undecidable. One of the most important skills in compiler design is the ability to formulate the right problem to solve. We need a good understanding of the behavior of programs to start with, and thorough experimentation and evaluation to validate our intuitions. Compiler optimizations must meet the following design objectives:
- The optimization must be correct, that is, preserve the meaning of the compiled program; it must improve the performance of many programs; the compilation time must be kept reasonable; and the engineering effort required must be manageable.
- It is impossible to overemphasize the importance of correctness. It is trivial to write a compiler that generates fast code if the generated code need not be correct! Optimizing compilers are so difficult to get right that we dare say that no optimizing compiler is completely error-free! Thus, the most important objective in writing a compiler is that it is correct.
- The second goal is that the compiler must be effective in improving the performance of many input programs. Normally, performance means the speed of the program execution. Especially in embedded applications, we may also wish to minimize the size of the generated code. And in the case of mobile devices, it is also desirable that the code minimize power consumption. Typically, the same optimizations that speed up execution time also conserve power. Besides performance, usability aspects such as error reporting and debugging are also important.
- Third, we need to keep the compilation time short to support a rapid development and debugging cycle. This requirement has become easier to meet as machines get faster. Often, a program is first developed and debugged without program optimizations. Not only is the compilation time reduced, but more importantly, unoptimized programs are easier to debug, because the optimizations introduced by a compiler often obscure the relationship between the source code and the object code. Turning on optimizations in the compiler sometimes exposes new problems in the source program; thus testing must again be performed on the optimized code. The need for additional testing sometimes deters the use of optimizations in applications, especially if their performance is not critical.
- Finally, a compiler is a complex system; we must keep the system simple to assure that the engineering and maintenance costs of the compiler are manageable. There is an infinite number of program optimizations that we could implement, and it takes a nontrivial amount of effort to create a correct and effective optimization. We must prioritize the optimizations, implementing only those that lead to the greatest benefits on source programs encountered in practice.
Thus, in studying compilers, we learn not only how to build a compiler, but also the general methodology of solving complex and open-ended problems. The approach used in compiler development involves both theory and experimentation. We normally start by formulating the problem based on our intuitions on what the important issues are.
Shell Control Structures
So far we have dealt with sequential control, in which instructions are executed one after another in order. Sometimes, however, we need to make a choice: do one thing if a condition is true, and something else otherwise. This is done with decision control instructions such as if-else, the case control structure (the switch statement in C), string comparisons, and more.
Let us discuss each of them:
Use of if – else expression:
To use the if-else expression, the shell provides a special command called "test". The "test"
command can perform the following types of tests:
1. Numerical tests
2. String tests
Numerical Tests:
The "test" command performs various numerical comparisons on integers, such as greater than, less than, and equal to. The shell provides the following set of operators:
(1) -eq equal to
(2) -ne not equal to
(3) -gt greater than
(4) -lt less than
(5) -ge greater than or equal to
(6) -le less than or equal to
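Before using these operators with if, it helps to know that "test" does not print anything; it simply returns an exit status (0 for true, non-zero for false), which is what if examines. A minimal sketch:

```shell
# test reports its result only through its exit status;
# the special variable $? holds the status of the last command
test 5 -eq 5
echo $?    # prints 0: the comparison is true
test 5 -eq 6
echo $?    # prints 1: the comparison is false
```

This is why any command, not just "test", can follow if: the shell runs it and branches on its exit status.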
Let us understand the use of these operators by writing a shell script that finds out whether two numbers are equal or not.
$cat > punit
# find both number are equal or not
echo Enter the two numbers
read a b
if test $a -eq $b
then
echo The two numbers are equal
else
echo The two numbers are not equal
fi
[Ctrl + d]
$_
When we execute the program we get:
Enter the two numbers
10 10
The two numbers are equal
OR
Enter the two numbers
10 11
The two numbers are not equal
There is an alternative to the "test" command: the square brackets [ ]. Suppose we want to find the greatest number among three.
$cat > punit
# find greatest number among the three
echo Enter three numbers
read a b c
big=$a
if [ $big -lt $b ]
then
big=$b
fi
if [ $big -lt $c ]
then
big=$c
fi
echo Largest number is $big
[Ctrl + d]
$_
When we execute this script, we will get:
Enter three numbers
10 1 12
Largest number is 12
From this script it is clear that the command [ $big -lt $b ] is the same as "test $big -lt $b"; the square brackets simply take the place of the test command. Also note that a blank space must be placed immediately after "[" and immediately before "]", otherwise the expression is treated as invalid.
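The two forms really are interchangeable; a quick sketch (the variable names here are just for illustration):

```shell
a=10
b=20
# the two if statements below behave identically
if test $a -lt $b
then
echo test form: $a is smaller
fi
if [ $a -lt $b ]
then
echo bracket form: $a is smaller
fi
```

On most systems [ is in fact a command (often a link to test) whose last argument must be "]", which is why the surrounding spaces are required.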
String Tests:
A string is just a series of characters. It may be a filename, a command name, or the value of a variable. The shell provides following set of options to handle string tests:
(1) string1 = string2 true, if the strings are equal
(2) string1 != string2 true, if the strings are not equal
(3) -n string true, if the string has non-zero length
(4) -z string true, if the string has zero length
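The -n and -z options test a single string's length rather than comparing two strings. A small sketch (the quotes around the variable keep the test well-formed even when the string is empty):

```shell
name=""
if [ -z "$name" ]
then
echo The string is empty
fi
name="Punit"
if [ -n "$name" ]
then
echo The string is not empty
fi
```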
Let us see a shell script in which we enter two strings and check whether they are equal or not:
$cat > stringcheck
#comparison of two string
echo Enter two strings
read first second
if [ $first = $second ]
then
echo The two strings are equal
else
echo The two strings are not equal
fi
[Ctrl + d]
$_
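One caution about this script: if the user enters nothing, or a string containing spaces, the unquoted $first and $second expand in a way that breaks the [ ] expression and causes a syntax error. Quoting the variables avoids this; a hedged sketch:

```shell
first=""
second="hello"
# quoted variables keep [ ... ] well-formed even for empty strings
if [ "$first" = "$second" ]
then
echo The two strings are equal
else
echo The two strings are not equal
fi
```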
When we execute that shell script we get:
Enter two strings
Punit Punit
The two strings are equal
OR
Enter two strings
Punit punit
The two strings are not equal
Logical operators:
The shell provides three logical operators for use within a test:
(1) -a binary AND operator
(2) -o binary OR operator
(3) ! unary NOT operator
Let us understand these operators through a shell script that finds out which number is greatest among three:
$cat > punit
# find greatest number among the three
echo Enter three numbers
read a b c
if [ $a -gt $b -a $a -gt $c ]
then
echo The greatest number is $a
elif [ $b -gt $a -a $b -gt $c ]
then
echo The greatest number is $b
else
echo The greatest number is $c
fi
[Ctrl + d]
$_
When we execute this shell script we get the following output:
Enter three numbers
10 1 12
The greatest number is 12
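The script above uses only the -a operator; -o and ! work the same way inside a test. A small sketch (the value 0 here is just an illustrative input):

```shell
a=0
# -o: true if either condition holds
if [ $a -lt 1 -o $a -gt 100 ]
then
echo $a is out of range
fi
# !: negates the result of the test that follows it
if [ ! $a -gt 0 ]
then
echo $a is not positive
fi
```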
The Case Statements:
The case statement is used to select one option from a list of alternatives. In case statements we use two keywords, "case" and "esac" ("case" spelt backwards). Each choice in a case statement is written as "choice1)", "choice2)", and so on; note that only a right parenthesis is used. The right parenthesis ) identifies the label, and *) is used for the default case. Remember that a double semicolon (;;) must end the commands for each choice. The double semicolon transfers control to the keyword "esac", which marks the end of the case statement. Strictly, the double semicolon is not necessary after the final choice, but there is no harm in using it there.
Let us write down a simple program of case statement:
$cat > punit
#case statements
echo Enter any day number from 1 to 7
read no
case $no in
1) echo Monday;;
2) echo Tuesday;;
3) echo Wednesday;;
4) echo Thursday;;
5) echo Friday;;
6) echo Saturday;;
7) echo Sunday;;
*) echo The entered choice is wrong;;
esac
[Ctrl + d]
$_
After executing the shell script we get the following output
Enter any day number from 1 to 7
1
Monday
Enter any day number from 1 to 7
0
The entered choice is wrong
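A case label is actually a pattern, so several values can share one branch using the | separator, and shell wildcards such as [1-5] also work. A brief sketch (the weekday/weekend grouping is just an illustration):

```shell
no=6
case $no in
[1-5]) echo Weekday;;
6|7) echo Weekend;;
*) echo The entered choice is wrong;;
esac
```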
In the last example of the case statement we used numbers as the case labels. We can also use words as the choices, for example:
$cat > punit
echo who
echo pwd
echo ls
echo date
echo Please enter the choice
read choice
case $choice in
who) who;;
pwd) pwd;;
ls) ls;;
date) date;;
*) echo You entered the wrong choice;;
esac
[Ctrl + d ]
$_
The output is something like this:
who
pwd
ls
date
Please enter the choice
pwd
/usr/punit
And so on……