Sed Command to Delete Lines: The sed command can be used to delete specific lines that match a given pattern or that sit at a particular position in a file. Here we will see how to delete lines using the sed command, with various examples.
The following file contains sample data that is used as the input file in all the examples:
Sed Command to Delete Lines – Based on Position in File
In the following examples, the sed command removes lines that sit at a particular position in a file.
1. Delete first line or header line
The d command in sed is used to delete a line. The syntax for deleting a line is:
Here N indicates the Nth line in a file. In the following example, the sed command removes the first line in a file.
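The original command isn't reproduced in this copy of the article; a minimal sketch (the file name and contents are illustrative) looks like this:

```shell
# Create a small sample file (name and data are illustrative)
printf 'header\nrow1\nrow2\n' > /tmp/sample.txt
# '1d' addresses line 1 and deletes it
out=$(sed '1d' /tmp/sample.txt)
echo "$out"
```

`sed '2d' file` would delete the second line instead, and so on for any N.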
2. Delete last line or footer line or trailer line
The following sed command is used to remove the footer line in a file. The $ indicates the last line of a file.
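A sketch of that command (sample file is illustrative):

```shell
printf 'row1\nrow2\ntrailer\n' > /tmp/sample.txt
# '$' addresses the last line of the file
out=$(sed '$d' /tmp/sample.txt)
echo "$out"
```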
3. Delete particular line
This is similar to the first example. The sed command below removes the second line in a file.
4. Delete range of lines
The sed command can be used to delete a range of lines. The syntax is shown below:
Here m and n are the starting and ending line numbers. The sed command removes lines m through n in the file. The following sed command deletes lines 2 through 4:
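A runnable sketch of the range deletion (data is illustrative):

```shell
printf 'a\nb\nc\nd\ne\n' > /tmp/sample.txt
# '2,4d' deletes lines 2 through 4 inclusive
out=$(sed '2,4d' /tmp/sample.txt)
echo "$out"
```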
5. Delete lines other than the first line or header line
Use the negation operator (!) with the d command. The following sed command removes all lines except the header line.
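A sketch of the negated address (sample data is illustrative):

```shell
printf 'header\nrow1\nrow2\n' > /tmp/sample.txt
# '1!d' deletes every line that is NOT line 1
out=$(sed '1!d' /tmp/sample.txt)
echo "$out"
```

The same negation works with other addresses: `sed '$!d' file` keeps only the last line, and `sed '2,4!d' file` keeps only lines 2 through 4.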
6. Delete lines other than last line or footer line
7. Delete lines other than the specified range
Here the sed command removes all lines other than the 2nd, 3rd, and 4th.
8. Delete first and last line
You can specify the list of lines you want to remove in the sed command, with a semicolon as a delimiter.
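A sketch of combining two deletions in one command (file contents are illustrative):

```shell
printf 'header\nrow1\nrow2\ntrailer\n' > /tmp/sample.txt
# the semicolon separates two independent commands: delete line 1, delete last line
out=$(sed '1d;$d' /tmp/sample.txt)
echo "$out"
```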
9. Delete empty lines or blank lines
The regular expression ^$ tells sed to delete empty lines. However, this does not remove lines that contain only spaces.
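A sketch demonstrating both the command and its caveat (the fourth input line below contains a single space, which survives):

```shell
printf 'one\n\ntwo\n \nthree\n' > /tmp/sample.txt
# ^$ matches only truly empty lines; the space-only line is kept
out=$(sed '/^$/d' /tmp/sample.txt)
echo "$out"
```

To also remove whitespace-only lines, use `sed '/^[[:space:]]*$/d' file` instead.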
Sed Command to Delete Lines – Based on Pattern Match
In the following examples, the sed command deletes the lines in file which match the given pattern.
10. Delete lines that begin with specified character
^ specifies the start of the line. The above sed command removes all lines that start with the character 'u'.
11. Delete lines that end with specified character
$ indicates the end of the line. The above command deletes all lines that end with the character 'x'.
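A sketch covering both anchor examples (the distribution names are illustrative sample data):

```shell
printf 'ubuntu\nfedora\ncentos\nunix\n' > /tmp/sample.txt
starts=$(sed '/^u/d' /tmp/sample.txt)  # drop lines starting with 'u'
ends=$(sed '/x$/d' /tmp/sample.txt)    # drop lines ending with 'x'
echo "$starts"; echo "$ends"
```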
12. Delete lines which are in upper case or capital letters
13. Delete lines that contain a pattern
14. Delete lines starting from a pattern till the last line
Here the sed command removes the first line that matches the pattern fedora, together with every line from that match to the end of the file.
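A sketch of the pattern-to-end-of-file deletion (sample data is illustrative):

```shell
printf 'ubuntu\ndebian\nfedora\ncentos\n' > /tmp/sample.txt
# the range runs from the first line matching 'fedora' through '$' (end of file)
out=$(sed '/fedora/,$d' /tmp/sample.txt)
echo "$out"
```

To delete the last line only when it matches a pattern, anchor the pattern to $ (e.g. with GNU sed: `sed '${/fedora/d}' file`).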
15. Delete last line only if it contains the pattern
Here $ indicates the last line. If you want to delete Nth line only if it contains a pattern, then in place of $ place the line number.
Note: In all the above examples, the sed command prints the file's contents on the terminal with the specified lines removed; it does not modify the source file. To remove the lines from the source file itself, use the -i option with the sed command.
If you don't wish to delete the lines from the original source file, you can redirect the output of the sed command to another file.
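The two approaches can be sketched as follows (file names are illustrative; `-i` with no suffix is GNU sed syntax, while BSD/macOS sed needs `-i ''`):

```shell
printf 'header\nrow1\n' > /tmp/sample.txt
# Redirect to a new file, leaving the source untouched
sed '1d' /tmp/sample.txt > /tmp/sample.out
kept=$(cat /tmp/sample.txt)
redirected=$(cat /tmp/sample.out)
# Or edit the source in place (GNU sed)
sed -i '1d' /tmp/sample.txt
inplace=$(cat /tmp/sample.txt)
echo "$kept"; echo "$redirected"; echo "$inplace"
```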
An experienced Linux user knows exactly what kind of nuisance blank lines can be in a file that needs processing. These empty lines not only get in the way of correctly processing such files but also make it difficult for a running program to read and write the file.
On a Linux operating system, it is possible to use several text manipulation expressions to get rid of these empty/blank lines in a file. In this article, empty/blank lines refer to lines consisting only of whitespace characters.
Create a File with Empty/Blank Lines in Linux
We need to create a reference file with some empty/blank lines. We will later amend it in the article through several techniques we will be discussing. From your terminal, create a text file of your choice with a name like “i_have_blanks” and populate it with some data and some blank spaces.
Throughout the article, we will be outputting the contents of a file on our terminal using the cat command for flexible referencing.
The three Linux commands that will propel us towards an ideal solution to this empty/blank lines problem are grep, sed, and awk.
Therefore, create three copies of your i_have_blanks.txt file and save them with different names so that each can be accommodated by one of the stated three Linux commands.
Through regex (regular expressions), we can identify blank lines with the POSIX character class [[:space:]].
How to Remove Blank/Empty Lines in Files
With this problem statement, we are considering the elimination of all existing empty/blank lines from a given readable file using the following commands.
1. Delete Empty Lines Using Grep Command
Supported use of shorthand character classes can reduce the grep command to a simple one like:
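The grep commands referred to above are not shown in this copy; a minimal sketch (sample file is illustrative; `\S` is a GNU grep extension) would be:

```shell
printf 'a\n\n \t\nb\n' > /tmp/blanks.txt
# -v inverts the match: keep lines that are NOT empty/whitespace-only
out=$(grep -v '^[[:space:]]*$' /tmp/blanks.txt)
# GNU grep shorthand: keep lines containing at least one non-whitespace character
short=$(grep '\S' /tmp/blanks.txt)
echo "$out"; echo "$short"
```

To fix the file itself, route through a temp file: `grep -v '^[[:space:]]*$' file > tmp && mv tmp file`.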
To fix a file with blank/empty lines, the above output needs to pass through a temp file before overwriting the original file.
As you can see, all the blank lines that spaced the content of this text file are gone.
2. Delete Empty Lines Using Sed Command
The d action in the command tells sed to delete every line that matches the given pattern. This command's blank-line matching and deleting mechanism can be represented in the following manner.
The above command matches lines that consist of nothing but whitespace and deletes them. Through sed's character-class support, it can be simplified to the following:
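A runnable sketch of the sed version (sample data is illustrative):

```shell
printf 'a\n\n   \nb\n' > /tmp/blanks.txt
# delete lines that are empty or contain only whitespace
out=$(sed '/^[[:space:]]*$/d' /tmp/blanks.txt)
echo "$out"
```

With GNU sed the class can be abbreviated as `sed '/^\s*$/d' file`.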
Also, because of sed's in-place editing support, we don't need a temp file to temporarily hold the converted output before overwriting the original text file, as was the case with the grep command. You do, however, need to pass the -i option as an argument.
3. Delete Empty Lines Using Awk Command
The awk command runs a non-white character check on each line of a file and only prints them if this condition is true. The flexibility of this command comes with various implementation routes. Its straightforward solution is as follows:
The interpretation of the above command is straightforward: only the lines that are not entirely whitespace are printed. The longer version of the command looks similar to the following:
Through awk non-blank character class support, the above command can also be represented in the following manner:
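Both awk forms can be sketched as follows (sample data is illustrative):

```shell
printf 'a\n\n   \nb\n' > /tmp/blanks.txt
# NF is 0 on empty or whitespace-only lines, so 'NF' alone filters them out
short=$(awk 'NF' /tmp/blanks.txt)
# the longer, explicit version with a character class and a print action
long=$(awk '!/^[[:space:]]*$/ { print }' /tmp/blanks.txt)
echo "$short"; echo "$long"
```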
As you can see, the file no longer has any blank lines.
The three discussed and implemented solutions to dealing with blank lines from files through grep, sed, and awk commands will take us a long way into implementing stable and efficient read and write file operations on a Linux system.
As a Linux administrator, superuser, or beginner learning the ropes of the operating system ecosystem, it is essential to learn the Linux command line tweaks on how to deal with duplicate lines in a text file.
A common scenario that needs a quick and efficient approach to getting rid of these duplicate lines in a text file is when handling log files.
Log files are key/important in improving the user experience of Linux users through their computer-generated data. Such data can be translated as usage operations, activities, and patterns associated with a server, device, application, or system.
Handling such log files can be tedious due to the sheer volume of information that continuously repeats itself. Sifting through such files by hand is close to impossible.
Problem Statement
We will need a sample text file to reference throughout this article to make things more interesting and engaging:
The text file we just created contains some visibly repeated lines that we will be trying to get rid of. In a production environment, such a file could have thousands of lines, making it difficult to spot the hidden duplicates.
Delete Duplicate Lines Using Sort and Uniq Commands
As per the man pages of these two GNU Coreutils commands, the sort command’s primary purpose is to sort lines within a text file while the uniq command’s primary purpose is to omit/report repeated lines within a targeted text file.
If we were to remove the duplicate lines from our text file using these two commands, we will run:
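The command pipeline is not shown in this copy; a sketch (sample data is illustrative) would be:

```shell
printf 'pear\napple\npear\nbanana\napple\n' > /tmp/dups.txt
# sort groups identical lines together so uniq can drop the repeats
out=$(sort /tmp/dups.txt | uniq)
echo "$out"
```

`sort -u /tmp/dups.txt` produces the same result in one step, and appending `> final.txt` saves the output to a file.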
As expected, the duplicate entries have been deleted.
We can even redirect the output of the above commands to a file like final.txt.
We can also use Linux’s sort command with the -u option to uniquely output the content of the file without duplicate lines.
To save the output to another file:
How to Find Most Repeated Lines in File
Sometimes you might be curious about identifying the most repeated lines in the text file. In such a case we will use two sort commands and a single uniq command.
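A sketch of that pipeline (sample data is illustrative): `uniq -c` prefixes each unique line with its count, and the second `sort -nr` ranks the counts in descending order.

```shell
printf 'a\nb\na\na\nc\nb\n' > /tmp/dups.txt
ranked=$(sort /tmp/dups.txt | uniq -c | sort -nr)
# extract the text of the most repeated line from the top-ranked row
top=$(echo "$ranked" | head -n 1 | awk '{print $2}')
echo "$top"
```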
Remove Duplicate Lines Using Awk Command
The awk command implements a pattern scanning and processing language (the GNU implementation, gawk, is distributed by the Free Software Foundation). Its approach gets rid of the duplicate lines in your text file without disturbing their original order.
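The classic awk one-liner for this is sketched below (sample data is illustrative). `seen[$0]++` is 0 (false) the first time a line appears, so `!seen[$0]++` is true and the default action prints the line; later copies are suppressed:

```shell
printf 'pear\napple\npear\nbanana\napple\n' > /tmp/dups.txt
out=$(awk '!seen[$0]++' /tmp/dups.txt)
echo "$out"
```

Note that, unlike `sort | uniq`, the original order of first occurrences is preserved.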
To save the output to another file:
We have successfully learned how to remove duplicate lines in a text file from the Linux operating system terminal environment.
How would I use sed to delete all lines in a text file that contain a specific string?
To remove the line and print the output to standard out:
To directly modify the file – does not work with BSD sed:
Same, but for BSD sed (Mac OS X and FreeBSD) – does not work with GNU sed:
To directly modify the file (and create a backup) – works with BSD and GNU sed:
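The four variants described above can be sketched as follows (the pattern `bad` and file name are illustrative; only the stdout form is executed here):

```shell
printf 'keep\nbad apple\nkeep too\n' > /tmp/f.txt
# 1. print the result to standard out
out=$(sed '/bad/d' /tmp/f.txt)
echo "$out"
# 2. GNU sed in-place:            sed -i '/bad/d' file
# 3. BSD sed (macOS) in-place:    sed -i '' '/bad/d' file
# 4. in-place with backup (both): sed -i.bak '/bad/d' file
```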
There are many other ways to delete lines containing a specific string besides sed:
Ruby (1.9+)
Shell (bash 3.2 and later)
GNU grep
And of course sed (printing the inverse is faster than actual deletion):
You can use sed to remove lines in place in a file. However, it seems to be much slower than using grep for the inverse into a second file and then moving the second file over the original.
The first command takes 3 times longer on my machine anyway.
The easy way to do it, with GNU sed :
You may consider using ex (which is a standard Unix command-based editor):
- + executes a given Ex command (man ex); it is the same as -c. Here the command given to -c is wq (write and quit)
- g/match/d – Ex command to delete lines with given match , see: Power of g
The above example is a POSIX-compliant method for in-place editing a file as per this post at Unix.SE and POSIX specifications for ex .
The difference with sed is that:
sed is a Stream EDitor, not a file editor. BashFAQ
Unless you enjoy unportable code, I/O overhead, and some other bad side effects. Basically, some parameters (such as in-place/-i) are non-standard FreeBSD extensions and may not be available on other operating systems.
I was struggling with this on Mac. Plus, I needed to do it using variable replacement.
sed -i '' "/$pattern/d" $file
where $file is the file where deletion is needed and $pattern is the pattern to be matched for deletion.
I picked the '' from this comment.
The thing to note here is the use of double quotes in "/$pattern/d". The variable won't be expanded if you use single quotes.
You can also use this:
Here -v will print only other than your pattern (that means invert match).
To get an in-place-like result with grep you can do this:
I have made a small benchmark with a file which contains approximately 345 000 lines. The way with grep seems to be around 15 times faster than the sed method in this case.
I have tried both with and without the setting LC_ALL=C; it does not seem to change the timings significantly. The search string (CDGA_00004.pdbqt.gz.tar) is somewhere in the middle of the file.
Here are the commands and the timings:
Delete lines from all files that match the match
The first command edits the file(s) inplace (-i).
The second command does the same thing but keeps a copy or backup of the original file(s) by adding .bk to the file names (.bk can be changed to anything).
You can also delete a range of lines in a file. For example to delete stored procedures in a SQL file.
sed '/CREATE PROCEDURE.*/,/END ;/d' sqllines.sql
This will remove all lines between CREATE PROCEDURE and END ;.
I have cleaned up many SQL files with this sed command.
echo -e "/thing_to_delete\ndd\033:x\n" | vim file_to_edit.txt
Just in case someone wants to do it for exact matches of strings, you can use the -w flag in grep (-w for whole word). For example, if you want to delete the lines that contain the number 11 but keep the lines with the number 111:
It also works with the -f flag if you want to exclude several exact patterns at once. If “blacklist” is a file with several patterns on each line that you want to delete from “file”:
to show the treated text in console
to save treated text into a file
to append treated text info an existing file
to treat already treated text, in this case remove more lines of what has been removed
the | more will show text in chunks of one page at a time.
Curiously enough, the accepted answer does not actually answer the question directly. The question asks about using sed to delete lines containing a string, but the answer seems to presuppose knowledge of how to convert an arbitrary string into a regex.
Many programming language libraries have a function to perform such a transformation, e.g.
But how to do it on the command line?
Since this is a sed-oriented question, one approach would be to use sed itself:
So given an arbitrary string $STRING we could write something like:
or as a one-liner:
with variations as described elsewhere on this page.
Remove a specific line or a number of lines from a file.
This should be implemented as a routine that takes three parameters (filename, starting line, and the number of lines to be removed).
For the purpose of this task, line numbers and the number of lines start at one, so to remove the first two lines from the file foobar.txt, the parameters should be: foobar.txt, 1, 2
Empty lines are considered and should still be counted, and if the specified line is empty, it should still be removed.
An appropriate message should appear if an attempt is made to remove lines beyond the end of the file.
11l
Ada
ALGOL 68
Amazing Hopper
Programming with macro-Hopper:
AutoHotkey
with test.txt starting as
Running the code it is now
AWK
BASIC
Compatible with VB-DOS, QBasic, QuickBASIC 4.5, PDS 7.1, QB64
Verbose and with a program loop to test the procedures.
IS-BASIC
C
C#
C++
Clojure
Simple solution dealing with most of the lines in memory.
More complex solution for big file, one line at a time.
Common Lisp
D
Delphi
Using TStringDynArray
Using TStringList
ECL
Implemented for HPCC logical files, not single physical files, since all datasets in HPCC are distributed.
And a simple test case to run:
Elixir
Erlang
With “foobar.txt” that looks like this:
The resulting contents are:
F#
Fortran
The proper approach is to copy the source file to an output file with modifications made along the way, then when all has gone well, delete (or better, rename) the source file and change the output file’s name to become the original name. Otherwise, data loss is being risked. This name juggling of course invites collisions. Not a problem with Fortran, because the language provides no mechanism for changing the names of files. However, the fallback method is to copy the file to a temporary file (with modifications along the way) and then overwrite the source file from the temporary file. This runs the risk of there being some mishap during the interval when no version of the original file exists, so one could make a copy of the original as well – remembering that file names can’t be changed.
As always, there is a problem with the length of a piece of string. The allowance here is 66666 characters. Some filesystems know the length of records, and may make the maximum record length available to enquiry, but others don’t know and use instead a marker which might be CR, CRLF, LF or LFCR and even a mixture in the same file. The Fortran programme does not see this with FORMATTED input, just the content of the record, character style. Output, on a windows/DOS system, will always terminate records with CRLF. A null record would be after the first CRLF in the sequence CRLFCRLF, and so on. With UNFORMATTED, the programme must make its own decisions.
When run on file foobar.txt containing
The result is file foobar.txt containing
And if run afresh, the file is unharmed and there appears output to the screen:
Formulating error messages is tedious in the absence of a function such as IFMT(n) to be used in CALL CROAK("First record must be positive, not "//IFMT(IST)). A more accomplished programme would worry about running out of disc space (signalled by the taking of the END=label option in a WRITE statement) and I/O errors along the way using the ERR=label option, but it is difficult to devise recovery schemes for unexpected errors. Similarly, the OPEN statements are at risk of confronting a file that is available for READ, but not for WRITE. It would help in organising all this if OPEN(...) were a function, but instead one can refer to the recondite IOSTAT error codes, possibly with assistance as with
Finally, there is a chance the operating system can be asked to do this, by fragmenting the file in-place into odd-sized pieces. The first piece would be all the records up to the chop, and the second would be all records after resumption. The advantage here is that the rest of the file need no longer be read and written, and files can be large.
This is a classic article written by Surendra Anne from the Linux.com archives. For more great SysAdmin tips and techniques check out our Essentials of Linux System Administration course!
Many people know about the cat command, which is useful for displaying entire file contents. But in some cases we have to print only part of a file. In today's post we will be talking about the head and tail commands, which are very useful when you want to view a certain part at the beginning or at the end of a file, especially when you are sure you want to ignore the rest of the file's content.
Let's start with the tail command, explore all of the features this handy command provides, and see how to use it best to suit your needs. After that we will show some things you can and cannot do with the head command.
Linux Tail Command Syntax
Tail is a command which prints the last few lines (10 by default) of a certain file, then terminates.
Example 1: By default “tail” prints the last 10 lines of a file, then exits.
Example :
As you can see, this prints the last 10 lines of /var/log/messages.
Example 2: What if you are interested in just the last 3 lines of a file, or maybe the last 15? This is when the -n option comes in handy, to choose a specific number of lines instead of the default 10.
Example :
Example 3: We can even view multiple files using a single tail command, without needing to execute multiple tail commands.
Example:
Example 4: This might be by far the most useful and commonly used option of the tail command. Unlike the default behaviour, which is to exit after printing a certain number of lines, the -f option (which stands for "follow") keeps the stream going. tail starts printing any lines appended to the file after it is opened, and keeps the file open to display updates on the console until the user interrupts the command.
Example :
As you can see in this example, I wanted to start the crond service, then watch the /var/log/cron log file as the service starts. I used ;, which is a kind of command chaining in Linux, in order to execute two commands on a single line. I am not interested in just a few lines followed by an exit; rather, I want to keep watching the whole log file until the service starts, then break out with CTRL+C.
Example 5: The same tail -f behaviour can be replicated using the less command as well. Once you open a file with less:
less /path/to/filename
Once you open the file, press Shift+F.
To come out of update mode in less, press Ctrl+C and then press q to quit.
Example 6: There is another option, -s, which should always be used with -f; it determines the sleep interval. While tail -f keeps watching the file, the refresh rate is one second by default. If you wish to control this, use the -s ("sleep") option and specify the sleep interval.
Example :
Example 7: As we saw in example 3, we can open multiple files using the tail command. We can even watch two files grow at the same time using the -f option. tail will also print a header showing which file produced each chunk of output; the header line begins with "==>".
Example:
Example 8: If you want to remove this header, use the -q option for quiet mode.
Example :
Example 9: Now, what if I have a very large /var/log/messages and I am only interested in the last certain number of bytes of data? The -c option can do this easily. Observe the example below, where I view only the last 500 bytes of data from /var/log/messages.
Example :
Now, since we have been talking for a while about tail, let's talk about the head command.
Head command in Linux
The head command, on the contrary to tail, prints the first 10 lines of a file. Up to this point in the post, the head command does pretty much the same as tail in all the previous examples, with the exception of the -f option; there is no -f option in head, which is natural, since files always grow from the bottom.
Head Command Syntax In Linux
Example 10: As mentioned earlier, print the first 10 lines.
Example 11: Print first two lines of a file.
Example 12: You can also print all lines starting from a line number you specify (typically with tail -n +N), unlike Example 11, which shows only the first N lines.
Example :
As you can notice, in this example, it printed all the lines starting after line 27.
Combine Head And Tail Command In Linux
Example 13: As the tail and head commands print different parts of files in an effective way, we can combine the two for more advanced filtering of file content. To print the 15th through 20th lines of the /etc/passwd file, use the example below.
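A runnable sketch of the combination (using a generated file of numbered lines as a stand-in for /etc/passwd):

```shell
seq 1 30 > /tmp/lines.txt
# head keeps lines 1-20; tail then keeps the last 6 of those, i.e. lines 15-20
out=$(head -n 20 /tmp/lines.txt | tail -n 6)
echo "$out"
```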
Example 14: Many people do not recommend the above method for printing a range of lines; the example is just to show how these commands can be combined. If you really want to print a particular range, use the sed command as shown below.
Example:
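A sketch of the sed equivalent (same generated stand-in file):

```shell
seq 1 30 > /tmp/lines.txt
# -n suppresses the default output; '15,20p' prints only lines 15 through 20
out=$(sed -n '15,20p' /tmp/lines.txt)
echo "$out"
```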
Ready to continue your Linux journey? Check out our Essentials of Linux System Administration course!
Trying to debug an issue with a server and my only log file is a 20GB log file (with no timestamps even! Why do people use System.out.println() as logging? In production?!)
Using grep, I’ve found an area of the file that I’d like to take a look at, line 347340107.
Other than doing something like
… which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 to 347340200 (for example) to the console?
update: I totally forgot that grep can print the context around a match; this works well. Thanks!
I found two other solutions if you know the line number but nothing else (no grep possible):
Assuming you need lines 20 to 40,
When using sed it is more efficient to quit processing after having printed the last line than continue processing until the end of the file. This is especially important in the case of large files and printing lines at the beginning. In order to do so, the sed command above introduces the instruction 41q in order to stop processing after line 41 because in the example we are interested in lines 20-40 only. You will need to change the 41 to whatever the last line you are interested in is, plus one.
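The technique described above can be sketched like this (a small generated file stands in for the large one):

```shell
seq 1 100 > /tmp/big.txt
# print lines 20-40, then quit at line 41 instead of scanning to end of file
out=$(sed -n '20,40p;41q' /tmp/big.txt)
echo "$out"
```

The other solution alluded to is `head -n 40 file | tail -n 21`, which likewise stops reading early.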
method 3 efficient on large files
fastest way to display specific lines
with GNU-grep you could just say
No there isn’t, files are not line-addressable.
There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.
Use the simplest/fastest tool you have to do the job. To me, using head makes much more sense than grep , since the latter is way more complicated. I’m not saying ” grep is slow”, it really isn’t, but I would be surprised if it’s faster than head for this case. That’d be a bug in head , basically.
I didn’t test it, but I think that would work.
I prefer just going into less and
- typing 50% to go to the halfway point of the file,
- 43210G to go to line 43210,
- :43210 to do the same,
and stuff like that.
Even better: hit v to start editing (in vim, of course!), at that location. Now, note that vim has the same key bindings!
You can use the ex command, a standard Unix editor (part of Vim now), e.g.
display a single line (e.g. 2nd one):
corresponding sed syntax: sed -n '2p' file.txt
range of lines (e.g. 2-5 lines):
sed syntax: sed -n '2,5p' file.txt
from the given line till the end (e.g. 5th to the end of the file):
sed syntax: sed -n '5,$p' file.txt
multiple line ranges (e.g. 2-4 and 6-8 lines):
sed syntax: sed -n '2,4p;6,8p' file.txt
Above commands can be tested with the following test file:
- + or -c followed by the command – execute the (vi/vim) command after file has been read,
- -s – silent mode, also uses current terminal as a default output,
- q followed by -c is the command to quit editor (add ! to do force quit, e.g. -scq! ).
I’d first split the file into few smaller ones like this
and then grep on the resulting files.
If the line number you want to read is 100:
sed will need to read the data too to count the lines. The only way a shortcut would be possible would there to be context/order in the file to operate on. For example if there were log lines prepended with a fixed width time/date etc. you could use the look unix utility to binary search through the files for particular dates/times
Here you will get the line number where the match occurred.
Now you can use the following command to print 100 lines
or you can use “sed” as well
With sed -e '1,N d; M q' you'll print lines N+1 through M. This is probably a bit better than grep -C as it doesn't try to match lines to a pattern.
Building on Sklivvz’ answer, here’s a nice function one can put in a .bash_aliases file. It is efficient on huge files when printing stuff from the front of the file.
To display a line from a huge file by its line number, just do this:
If you want a more powerful way to show a range of lines with regular expressions (I won't say why grep is a bad idea for doing this; it should be fairly obvious), this simple expression will show you your range in a single pass, which is what you want when dealing with 20GB text files:
(tip: if your regex has / in it, use something like m! ! instead)
This would print out starting with the line that matches up until (and including) the line that matches .
It doesn’t take a wizard to see how a few tweaks can make it even more powerful.
Last thing: perl, since it is a mature language, has many hidden enhancements to favor speed and performance. With this in mind, it makes it the obvious choice for such an operation since it was originally developed for handling large log files, text, databases, etc.
How do I find the nth line in a file in Linux command line? How do I display line number x to line number y?
In Linux, there are several ways to achieve the same result. Printing specific lines from a file is no exception.
To display the 13th line, you can use a combination of head and tail:
Or, you can use sed command:
To display line numbers from 20 to 25, you can combine head and tail commands like this:
Or, you can use the sed command like this:
Detailed explanation of each command follows next. I’ll also show the use of awk command for this purpose.
Display specific lines using head and tail commands
This is my favorite way of displaying lines of choice. I find it easier to remember and use.
Print a single specific line
Use a combination of the head and tail commands in the following fashion for line number x:
You can replace x with the line number you want to display. So, let’s say you want to display the 13th line of the file.
Explanation: You probably already know that the head command gets the lines of a file from the start while the tail command gets the lines from the end.
The "head -x" part of the command gets the first x lines of the file. That output is then piped to the tail command, and "tail -1" prints only the last line of it.
Quite obviously, if you take 13 lines from the top, the last of those 13 lines is the 13th line of the file. That's the logic behind this command.
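The combination described above can be sketched as follows (a generated file of numbered lines makes the result easy to check):

```shell
seq 1 20 > /tmp/f.txt
# first 13 lines, then the last one of those: the 13th line
out=$(head -n 13 /tmp/f.txt | tail -n 1)
echo "$out"
```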
Print specific range of lines
Now let’s take our combination of head and tail commands to display more than one line.
Say you want to display all the lines from x to y. This includes the xth and yth lines also:
Let's take a practical example. Suppose you want to print all the lines from line number 20 to 25:
Use SED to display specific lines
The powerful sed command provides several ways of printing specific lines.
For example, to display the 10th line, you can use sed in the following manner:
The -n suppresses the output while the p command prints specific lines. Read this detailed SED guide to learn and understand it in detail.
To display all the lines from line number x to line number y, use this:
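Both sed forms can be sketched on a generated sample file:

```shell
seq 1 20 > /tmp/f.txt
line=$(sed -n '10p' /tmp/f.txt)     # just the 10th line
range=$(sed -n '3,5p' /tmp/f.txt)   # lines 3 through 5
echo "$line"; echo "$range"
```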
Use AWK to print specific lines from a file
The awk command could seem complicated and there is surely a learning curve involved. But like sed, awk is also quite powerful when it comes to editing and manipulating file contents.
NR denotes the ‘current record number’. Please read our detailed AWK command guide for more information.
To display all the lines from x to y, you can use awk command in the following manner:
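The awk equivalents can be sketched like this. The condition alone is enough, since awk's default action is to print the current line:

```shell
seq 1 20 > /tmp/f.txt
line=$(awk 'NR == 10' /tmp/f.txt)             # NR is the current record number
range=$(awk 'NR >= 3 && NR <= 5' /tmp/f.txt)  # lines 3 through 5
echo "$line"; echo "$range"
```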
It follows a syntax similar to that of most programming languages.
I hope this quick article helped you in displaying specific lines of a file in Linux command line. If you know some other trick for this purpose, do share it with the rest of us in the comment section.
I guess everyone knows the useful Linux command line utilities head and tail. head allows you to print the first X lines of a file; tail does the same but prints the end of the file. What is a good command to print the middle of a file? Something like middle --start 10000000 --count 20 (print the 10'000'000th till the 10'000'010th lines).
I’m looking for something that will deal with large files efficiently. I tried tail -n 10000000 | head 10 and it’s horrifically slow.
11 Answers
You might be able to speed that up a little like this:
In those commands, the option -n causes sed to “suppress automatic printing of pattern space”. The p command “print[s] the current pattern space” and the q command “Immediately quit[s] the sed script without processing any more input. ” The quotes are from the sed man page.
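Putting -n, p, and q together, the sped-up command has the shape below; it is shown here on a small demo file so the effect is easy to verify, but in the question it would target line 10000000 of the big file:

```shell
# Build a 100-line demo file; the real use case is a 10-million-line file.
seq 1 100 > demo.txt

# Print line 42 and quit immediately, so sed never reads past the match:
sed -n '42{p;q}' demo.txt    # prints: 42
```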
By the way, your command
starts at the ten millionth line from the end of the file, while your “middle” command would seem to start at the ten millionth from the beginning which would be equivalent to:
The problem is that for unsorted files with variable length lines any process is going to have to go through the file counting newlines. There’s no way to shortcut that.
If, however, the file is sorted (a log file with timestamps, for example) or has fixed length lines, then you can seek into the file based on a byte position. In the log file example, you could do a binary search for a range of times as my Python script here* does. In the case of the fixed record length file, it’s really easy. You just seek linelength * linecount characters into the file.
* I keep meaning to post yet another update to that script. Maybe I’ll get around to it one of these days.
On Linux, you can do a single task in several ways. Likewise, if you want to count the number of lines in single or multiple files, you can use different commands. In this article, I’ll share five different ways that you can use to print the total number of lines in a large file.
1. Count Number Of Lines Using wc Command
As wc stands for “word count“, it is the most suitable and easy command that has the sole purpose of counting words, characters, or lines in a file.
Let’s suppose you want to count the number of lines in a text file called distros.txt.
View File Contents
You can use “-l” or “–line” option with wc command as follows:
Count Lines in File
You can see that wc also counts the blank line and prints the number of lines along with the filename. In case you want to display only the total number of lines, you can also hide the filename by redirecting the content of the file to wc using a left-angle bracket (<) instead of passing the file as a parameter.
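A sketch of both invocations, assuming a small stand-in for the distros.txt sample file:

```shell
printf 'Ubuntu\nFedora\nDebian\n' > distros.txt

wc -l distros.txt      # prints the count and the filename
wc -l < distros.txt    # redirection: prints only the count
```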
Print Total Lines in File
Moreover, to display a number of lines from more than one file at the same time, you need to pass the filenames as arguments separated by space.
Count Lines in Multiple Files
Alternatively, you can also make use of the cat command to send the file content to the wc command as input via a pipe (‘|’).
Redirect File Content
Though it will also count the number of lines in a file, here the use of cat seems redundant.
2. Count Number Of Lines Using Awk Command
Awk is a very powerful command-line utility for text processing. If you already know awk, you can use it for several purposes including counting the number of lines in files.
However, mastering it may take time if you’re at a beginner level. Hence, if you just want to use it to count the total number of lines in a file, you can remember the following command:
Count Lines in File Using Awk
Here, NR holds the number of records (that is, lines) read so far; in the END section it equals the total number of lines in the file being processed.
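The awk command described above, on an illustrative stand-in for distros.txt:

```shell
printf 'Ubuntu\nFedora\nDebian\n' > distros.txt

# NR in the END block equals the total line count:
awk 'END { print NR }' distros.txt    # prints: 3
```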
3. Count Number Of Lines Using Sed Command
Sed is also a very useful tool for filtering and editing text. More than a text stream editor, you can also use sed for counting the number of lines in a file using the command:
Count Lines in File Using Sed
Here, ‘=’ prints the current line number to standard output. So, combining it with the -n option, it counts the total number of lines in a file passed as an argument.
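The sed form of the count, again on an illustrative sample file:

```shell
printf 'Ubuntu\nFedora\nDebian\n' > distros.txt

# '$' addresses the last line; '=' prints its line number, i.e. the count:
sed -n '$=' distros.txt    # prints: 3
```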
4. Count Number Of Lines Using Grep Command
Using yet another useful pattern search command grep, you can get the total number of lines in a file using ‘-e’ or ‘–regexp’ and ‘-c’ or ‘–count’ options.
Count Lines in File Using Grep
Here, ‘$’ is a regular expression representing the end of a line and ‘^’ the start of a line. You can use either of the two regular expressions.
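A sketch of the grep variant, using ‘^’ (every line has a start, so every line matches):

```shell
printf 'Ubuntu\nFedora\nDebian\n' > distros.txt

grep -c -e '^' distros.txt    # prints: 3
```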
5. Count Number Of Lines Using nl and cat Commands
Instead of directly getting the total number of lines, you can also print the file content and get the total number of lines by peeking at the last line number. For this purpose, nl is a simple command to print data with numbered lines.
Print Numbering Lines in File
For large files, it does not seem like a suitable method to display all data in a terminal. So what you can also do is pipe the data to tail command to just print only some of the last numbered lines.
List Numbering Lines in File
Likewise, a cat command with ‘-n’ can also be used to print file content with line numbers.
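Both numbering approaches can be sketched on a small sample file; the last printed line number is the line count:

```shell
printf 'Ubuntu\nFedora\nDebian\n' > distros.txt

nl distros.txt | tail -n 1      # last numbered line shows the count
cat -n distros.txt | tail -n 1  # same idea with cat -n
```

Note that nl skips blank lines by default; use `nl -ba` if you want blank lines numbered too.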
Print File Content With Line Numbers
Conclusion
After learning five ways to count the number of lines, you must be wondering which way is best for you. In my opinion, whether you’re a beginner or an advanced user of Linux, the wc command is the easiest and fastest approach.
However, if you’re in the process of learning other powerful tools like grep, awk, and sed, you can also practice these commands for this purpose to hone your skills.
Linux offers really good text processing and editing tools. One of these tools is the uniq command. The uniq command helps you detect and delete adjacent occurrences of the same line. That means it deals with repetitions of sentences in a piece of text.
Table of Contents
Using the uniq command in Linux
In this tutorial we will see how the uniq command works. Let’s get started.
1. Create a sample text file
We will create a sample text file with a few repeated lines.
The text for the file is given below :
Here the first two lines are the same. The third line is different and the remaining lines are all alike.
To create a file with this text use the cat command.
2. Using uniq to delete repeated lines from the text
To delete repeated lines from the text, use :
There are no repetitions in the text anymore. As you can see, the output displays line 1 and line 2 as unique lines even though the content is the same. That’s because Linux is case sensitive.
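A minimal sketch of that behaviour, with a freshly created stand-in for the sample file (the exact contents are illustrative; note the case difference on the first two lines):

```shell
# 'line1' and 'Line1' differ only in case, so uniq keeps both:
printf 'line1\nLine1\nline2\nline3\nline3\n' > sample.txt

uniq sample.txt
```

The output keeps line1, Line1 and line2 as-is, and collapses the two copies of line3 into one.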
3. Get a count of the number of repetitions
To count the number of repetitions use the following line of code :
Output :
The output contains lines from the text with the count at the beginning.
4. Only print the repeated lines
The uniq command gives you an option to only print the lines that occur more than once. To print only the repeated lines use :
Output :
5. Only print the non-repeated lines
This is the opposite of the example above. When you use the -u flag along with the uniq command then only the lines that occur once are printed. To print only the non-repeated lines use :
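The -c, -d and -u options from the last three sections can be sketched together on one illustrative file:

```shell
# Sample data: 'apple' twice, 'banana' once, 'cherry' three times.
printf 'apple\napple\nbanana\ncherry\ncherry\ncherry\n' > fruits.txt

uniq -c fruits.txt   # prefix each line with its repeat count
uniq -d fruits.txt   # only the repeated lines: apple, cherry
uniq -u fruits.txt   # only the line occurring once: banana
```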
6. How to delete repeated lines that don’t occur together?
If you want to delete multiple occurrences of a line that don’t occur together then you can sort the text first.
For example, consider the text given below :
Let’s see what happens when we run the uniq command on this text.
Output :
There is no change in the text. Let’s use sorting so that same lines occur together.
We can sort the file and store the output in another file with the use of the sort command:
After sorting the text looks like this :
Now we can use the uniq command to delete the repeated lines.
We can also count the number of occurrences for each line.
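The sort-then-uniq steps above can be sketched like this (file names and contents are illustrative):

```shell
# 'a' and 'b' each repeat, but not on adjacent lines:
printf 'b\na\nb\na\n' > mixed.txt

sort mixed.txt > sorted.txt   # group identical lines together
uniq sorted.txt               # duplicates now collapse: a, b
sort mixed.txt | uniq -c      # count occurrences of each line
```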
7. How to store the output to a file?
When you run the uniq command on a file, the contents of the original file are not modified. To save the output of the uniq command you can redirect it to a file. You can do that using :
Conclusion
This tutorial was about the uniq command in Linux. We learned how to use this command for deleting repeated occurrences of a line. Hope you had fun learning with us! You can learn more about the uniq command using the man command.
It is the complement of the head command. The tail command, as the name implies, prints the last N lines of the given input. By default, it prints the last 10 lines of the specified files. If more than one file name is provided, the data from each file is preceded by its file name.
Syntax:
Let us consider two files having name state.txt and capital.txt contains all the names of the Indian states and capitals respectively.
Without any option, it displays only the last 10 lines of the file specified.
Example:
Options:
1. -n num: Prints the last ‘num’ lines instead of the last 10. Specifying num in the command is mandatory, otherwise it displays an error. The command can also be written without the ‘n’ character, but the ‘-’ sign is mandatory.
The tail command also comes with a ‘+’ option, which is not present in the head command. With this option, the tail command prints the data starting from the specified line number of the file instead of the end. For the command tail +n file_name, data will start printing from line number ‘n’ till the end of the file specified.
2. -c num: Prints the last ‘num’ bytes from the file specified. A newline counts as a single character, so if tail prints a newline, it counts it as a byte. With this option it is mandatory to write -c followed by a positive or negative num, depending on the requirement. With +num, it displays all the data after skipping num bytes from the start of the specified file, and with -num, it displays the last num bytes of the file.
Note: Without positive or negative sign before num, command will display the last num bytes from the file specified.
3. -q: It is used if more than one file is given. With this option, the data from each file is not preceded by its file name.
4. -f: This option is mainly used by system administrators to monitor the growth of log files written by many Unix programs as they are running. It shows the last ten lines of a file and updates as new lines are added. As new lines are written to the log, the console updates with them. The prompt doesn’t return even after the work is over, so we have to use the interrupt key to abort this command. In general, applications write error messages to log files, and you can use the -f option to check for the error messages as and when they appear in the log file.
5. -v: By using this option, data from the specified file is always preceded by its file name.
6. –version: This option is used to display the version of tail which is currently running on your system.
Applications of tail Command
1. How to use tail with pipes (|): The tail command can be piped with many other Unix commands. In the following example, the output of the tail command is given as input to the sort command with the -r option to sort the last 7 state names coming from the file state.txt in reverse order.
It can also be piped with one or more filters for additional processing. In the following example, we are using the cat, head and tail commands and storing the output in the file list.txt using the redirection operator (>).
What is happening in this command? Let’s explore it. First, the cat command gives all the data present in the file state.txt, and the pipe transfers that output to the head command. The head command gives all the data from the start (line number 1) to line number 20, and the pipe transfers its output to the tail command. The tail command then gives the last 5 lines of that data, and the output goes to the file list.txt via the redirection operator.
2. Print line between M and N lines
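A common way to combine tail and head for this, assuming the state.txt file from earlier and an illustrative range of M=4 to N=6:

```shell
# tail -n +M starts output at line M; head -n $((N-M+1)) then keeps
# exactly the lines M through N. Here M=4 and N=6, so head keeps 3 lines.
tail -n +4 state.txt | head -n 3
```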
This article is contributed by Akash Gupta.
Keeping in view the importance of sed command; our today’s guide will explore several ways to remove special characters using sed command in Ubuntu.
The syntax of sed command is written below:
Syntax
Special characters may sometimes be needed in the content of a text file, but if they are used unnecessarily they make the file messy, and there is a chance the reader will not pay attention, resulting in a purposeless document.
How to use sed to remove special characters in Ubuntu
This section will briefly describe the ways to remove special characters from a text file using sed. It depends on how many characters you want to remove from your file; there are two possibilities: either you want to remove a single special character, or you want to remove multiple characters at once. Based on these possibilities, this section is split into two methods:
Method 1: How to remove a single character using sed
Method 2: How to remove multiple characters at once using sed
The first method addresses the first possibility, and the second possibility will be discussed in Method 2, let’s dig into them one by one:
Method 1: How to remove a single special character using sed
We have created a text file “ch.txt” that contains a few special characters on different lines; the content inside the file is displayed below:
You can notice that the content inside “ch.txt” is difficult to read. For instance, say we want to remove the character “#” from the text file; for this, we have to use the following command to remove “#” from the whole document:
Moreover, if you want to remove the special character from a specific line, you must insert the line number alongside the “s” keyword; the command below removes “#” from line number 3 only:
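A sketch of both commands; the contents of ch.txt here are a small stand-in for the file shown in the screenshots:

```shell
# Small stand-in for ch.txt:
printf 'a#b\nc#d\ne#f\n' > ch.txt

sed 's/#//g' ch.txt    # remove '#' everywhere in the file
sed '3s/#//g' ch.txt   # remove '#' on line 3 only
```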
Method 2: How to remove multiple characters at once using sed
Now we have another file, “file.txt”, that contains more than one type of character, and we want to remove them in a single go. In this method the syntax changes a little from the command above; for example, we will remove five characters, “#$%*@”, from “file.txt”.
Firstly, look at the content of “file.txt”; the words are interrupted by these characters.
The command stated below will remove all these special characters from “file.txt”:
Here we can draw another example, let’s say we want to remove only a few characters from specific lines.
We have created a new file and the content of the “newfile.txt” is shown below:
For this, we have written command that will delete “#@” and “%*” from lines 2 and 3 of “newfile.txt” respectively.
The sed commands used in the above methods display the result only on the terminal rather than applying the changes to the text file. For that, we must use the “-i” option of the sed command. It can be used with any sed command, and the changes will be made to the file instead of being printed on the terminal.
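A sketch of the in-place form, assuming GNU sed’s -i option (the default on Ubuntu):

```shell
printf 'he#llo w@orld\n' > file.txt

# -i edits file.txt in place instead of printing to the terminal:
sed -i 's/[#$%*@]//g' file.txt
cat file.txt    # prints: hello world
```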
Conclusion
Apparently, the sed command acts as a usual text editor, but it has a far more extensive list of actions compared to other editors. You just write a command and the changes are made automatically; this feature attracts Linux enthusiasts and users who prefer the terminal over a GUI. Following the advantageous functionalities of sed, our guide focused on removing special characters from a text file. If we compare only this feature of the sed command with other editors: there, you have to search for the characters throughout the file and then remove them one by one, a tedious process. On the other hand, sed performs the same action with a single-line command on the terminal.
In UNIX and Linux-type operating systems, the log is a file that records each action of the operating system. Whenever a user logs in to the system, it saves a record in the log file. It also allows the user to add content to the file.
For this, the term “logger” is the command-line tool that provides a shell command interface and gives the user an easy approach to add logs in the /var/log/syslog files. You can add entries into the log files using the “logger” command.
The syntax of this command-line utility is:
How to Use logger Command with Options:
The “logger” command is a pre-built tool in Linux systems. Using this command, users can perform various functions with different options:
Print “syslog” file:
The syslog file plays an important role in Linux distributions as it stores all the log data in the /var/log directory.
To view the syslog file in the terminal, execute the following tail command:
Specify the syslog Lines:
The “tail” command is used to capture records from syslog files and print them in the terminal. By default, when the tail command is executed, it prints the last 10 log lines of a file. But we can also specify the number of log lines to print:
Add log into syslog file:
You can add any message to the syslog file through the “logger” command without passing any option.
Run the “tail” command to print it on the terminal:
Log “who” Command:
The “logger” command can also be used to add the standard output of any command. Type “who” with the logger command to add its output to the syslog file:
Display it with the tail command:
Log Specified File:
The “logger” command allows the user to add the content of a specified file into the syslog file using the “-f” option.
Let’s create a file named “test_file1.txt” and add some text to it:
Now, to print the file log in the terminal, execute the given command:
NOTE: In the tail command, tail -2 means that it will print the last two output lines. But if you want to print the detailed output with all the logs, you don’t need to specify the number of lines.
Specify Log Size:
Some log lines can be long strings; to limit them, use the “--size” option. Run the “--size” option in the following way:
(In the above command, we added random characters to the log and displayed only the first 12 characters using the size option. tail -1 will print only the last line of the displayed result.)
Ignore Empty Lines:
Use the “-e” option if the file contains empty lines in it. It will remove the blank lines from the file and print the output in the standard way.
For example, add some blank lines in the text file we created:
Run the “-e” option with the file name “test_file1.txt” to remove empty lines:
Display Help:
Type the “–help” option to display the help message about the “logger” command and its options:
Conclusion:
The “syslog” file in every system keeps a record of each action performed by the operating system. There is a “logger” command in the Linux systems that provides an interface to the user to add logs in the “/var/log/syslog” file using the terminal.
In this writing, we have discussed the Linux “logger” command and learned the functionality of its different options through multiple examples.
About the author
Wardah Batool
I am a Software Engineering graduate and self-motivated Linux writer. I also love to read the latest Linux books. Moreover, in my free time, I love to read books on personal development.
Can I get a specified number of lines at a time using the less command? I want to show just, e.g., 20 lines even if my screen allows more.
3 Answers
less works with screens of text. The “screen” is the full size of the terminal.
less --window=n can tell less to use only so many rows at a time. That being said, the option is not always available.
If you only want “some” output try tail -n 20 /file.txt for the last 20 lines, or I personally use head -n 20 | tail -n 10 to get the middle 10 lines.
Display a file from line number X:
Use the -N option to output line numbers
displays from line number 15000 with line numbers displayed
From the less manual: this scrolls n lines at a time, but still shows a whole screenful.
Changes the default scrolling window size to n lines. The default is one screenful. The z and w commands can also be used to change the window size. The “z” may be omitted for compatibility with some versions of more. If the number n is negative, it indicates n lines less than the current screen size. For example, if the screen is 24 lines, -z-4 sets the scrolling window to 20 lines. If the screen is resized to 40 lines, the scrolling window automatically changes to 36 lines.
In this article, we will learn how to use the sed command in Linux with 12 practical examples. The sed command is a powerful and useful tool in Unix/Linux for editing content (files) line by line, including inserts, appends, changes, and deletes. Furthermore, it supports regular expressions, so it can match complex patterns. Commonly it is used to find and replace strings in files like configuration files, bash scripts, SQL files, etc.
The sed command is mostly derived from the ‘ed’ text editor. It allows us to quickly remove or replace content without having to open a file. For editing purposes, we have several text editors available in Linux, such as vi, emacs, vim, and jed. However, the sed utility functions with no limitations on any standard Unix or Linux terminal. Once you understand how syntax patterns work, it is pretty easy to use sed in Linux. This is why most experienced Linux users use the sed command, since it allows them to perform powerful tasks like substituting, inserting, or deleting text in a file or stream programmatically.
This guide will walk you through the 12 most common sed command examples in Linux. If you are just getting started with scripting (bash), the sed utility is essential. All examples in this section have been tested on RHEL, CentOS-Stream, and Rocky Linux.
The global syntax for the sed command in Linux is as follows (there are two forms):
sed [option] ‘command’ [input-file]
sed [option] -f script-file [input-file]
– The 1st syntax executes the command directly from the terminal/command line.
– The 2nd syntax executes the commands from a script file.
* There is not much difference between these two syntaxes.
An overview of the most commonly used options, flags, and special characters for Linux’s sed command, is provided in the following table.
To demonstrate all the following examples, let me first create a sample file that can be used throughout this session.
We have several options for creating files in Linux. I’ll be using the ‘cat’ command here. This command is one of the most widely used in a standard Unix/Linux application. For more information about cat commands and options, click here
The below substitution command is divided into four parts:
s ==> It specifies the substitution command
/ ==> It specifies the delimiter
Linux ==> It specifies the search pattern (regular expression)
Unix ==> It specifies the replacement string.
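Assembled from the parts above, the substitution looks like this (the input text is illustrative):

```shell
# Replace the first occurrence of 'Linux' on each line with 'Unix':
echo "I love Linux" | sed 's/Linux/Unix/'    # prints: I love Unix
```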
Converting text between uppercase and lowercase can be very tedious, especially when you want to avoid inadvertent misspellings. Fortunately, Linux provides a handful of commands that can make the job very easy.
There are many ways to change text on the Linux command line from lowercase to uppercase and vice versa. In fact, you have an impressive set of commands to choose from. This post examines some of the best commands for the job and how you can get them to do just what you want.
Using tr
The tr (translate) command is one of the easiest to use on the command line or within a script. If you have a string that you want to be sure is in uppercase, you just pass it through a tr command like this:
Below is an example of using this kind of command in a script when you want to be sure that all of the text that is added to a file is in uppercase for consistency:
Switching the order to [:upper:] [:lower:] would have the opposite effect, putting all the department names in lowercase:
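A quick sketch of both directions with tr:

```shell
echo "marketing" | tr '[:lower:]' '[:upper:]'   # prints: MARKETING
echo "MARKETING" | tr '[:upper:]' '[:lower:]'   # prints: marketing
```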
Similarly, you could use the sed command’s A-Z and a-z strings to accomplish the same thing:
As you undoubtedly suspect, reversing the order of the a-z and A-Z strings will have the opposite effect, turning the text to all lowercase.
Using awk
The awk command lets you do the same thing with its toupper and tolower options. The command in the script shown in the previous example could be done this way instead:
The reverse (switching to lowercase) would look like this:
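Both awk directions can be sketched as:

```shell
echo "marketing" | awk '{ print toupper($0) }'   # prints: MARKETING
echo "MARKETING" | awk '{ print tolower($0) }'   # prints: marketing
```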
Using sed
The sed (stream editor) command also does a great job of switching between upper- and lowercase. This command would have the same effect as the first of the two shown above.
Switching from uppercase to lowercase would simply involve replacing the U near the end of the line with an L.
Manipulating text in a file
Both awk and sed also allow you to change the case of text for entire files. So, you just found out your boss wanted those department names in all lowercase? No problem. Just run a command like this with the file name provided:
If you want to overwrite the depts file, instead of just displaying its contents in lowercase, you would need to do something like this:
Making the change with sed, however, you can avoid that last step because sed can edit a file “in place” as shown here, leaving the file intact, but the text in all lowercase:
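A sketch of the in-place edit, assuming GNU sed (the version on most Linux systems); \L in the replacement lowercases everything that follows:

```shell
printf 'SALES\nHR\nFINANCE\n' > depts

# Lowercase every line of the file in place:
sed -i 's/.*/\L&/' depts
cat depts
```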
Capitalizing first letters only
To capitalize only the first letters of words in a string, you can do something like this:
That command will ensure that first letters are capitalized, but won’t change the rest of the letters.
Making sure only first letters are uppercase
It’s a little more challenging when you want to change text so that only first letters are in uppercase. Say you’re manipulating a list of staff members’ names and you want them to be formatted in the normal Firstname Lastname manner.
with sed
You could use a considerably more complex sed command to ensure this result:
with python
If you have python loaded, you can run a command like this that also formats text so that only the first letters of each word are capitalized and the command may be a little easier to parse than the sed command shown above:
There are many ways to change the formatting of text between upper- and lowercase. Which works best depends in part of whether you’re manipulating a single string or an entire file and how you want the end result to look.
Sandra Henry-Stocker has been administering Unix systems for more than 30 years. She describes herself as “USL” (Unix as a second language) but remembers enough English to write books and buy groceries. She lives in the mountains in Virginia where, when not working with or writing about Unix, she’s chasing the bears away from her bird feeders.
Printing lines that contain a particular string at the beginning of a line is quite tedious to deal with manually, so we can make use of bash to design a shell script. The most versatile and powerful tools for searching patterns or lines are grep, awk, and sed; we will use them to find the most efficient solution to the problem.
The first thing to get started with is the file in which the string is to be searched and also the string. Both of the parameters will be inputted from the user. After that, we can use any one of the three tools grep, sed or awk. The concept is the same, we need to find the string only at the beginning of a line using regex and print the lines depending on the tool to be used.
User input
We will input the file name and string from the user. We will use the read command and pass in the -p argument to prompt the user a text to display what he/she should input. We will store the input in the appropriate variable names.
read -p "Enter the file name : " file
read -p "Enter the string to search for in the file : " str
Searching and Printing Lines
The following are the three tools used to perform the operation of finding and printing lines that start with a string input by the user. Any one of them can be used per the user’s choice and requirements.
Method 1: Using GREP Command
The grep command is quite useful for finding patterns in a file or strings in a line. We will be using a simple grep command that searches for the word at the beginning of a line using the ^ operator and prints the matching lines from the file. This is quite straightforward to understand if you are familiar with the grep command in Linux. The grep command will simply search for the input string/word in the file and print the lines in which the string occurs at the beginning. The last part of the command, “grep -v grep || true”, makes the pipeline return 0 even when grep finds no match; this suppresses the annoying shell exit status of 1, nothing is printed, and the user understands that no line in the file started with that string.
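A sketch of the grep step; $file and $str come from the read prompts above, the sample data is illustrative, and the script’s grep -v grep filter is omitted here for clarity:

```shell
file=notes.txt
str=apple
printf 'apple pie\ncrab apple\napples galore\n' > "$file"

# ^ anchors the match to the start of the line; || true keeps the exit
# status 0 when nothing matches, so the script does not report an error.
grep "^$str" "$file" || true
```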
using the grep command to search for strings at the start of lines.
The grep command is also a bit flexible as it allows us to print the line numbers as well. We can print the line numbers by adding some arguments to the above command. The -n argument after grep will print the line number in which the string is found in the file.
using the grep command to search for strings at the start of lines and print line numbers.
Both the cases in grep will be case-sensitive. To make the search case insensitive, you can add in the argument -i just after the grep like,
using grep command to search case insensitive strings at the start of lines.
Method 2: Using SED Command
The sed command is different from the grep command, as it is a stream editor and not a command that just takes arguments. We need to use -n so that the command doesn’t print everything from the file provided in the last argument of the command below. The following regex will search for the string variable and print all the matched lines, giving a compact output. The regex used is simple and similar to grep, as it matches the string only at the beginning of the line using the ^ operator. The p argument will print the lines that match the regex. This search is also case-sensitive.
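The sed variant can be sketched like this, with the same illustrative variables and data:

```shell
file=notes.txt
str=apple
printf 'apple pie\ncrab apple\n' > "$file"

# -n suppresses automatic printing; /^.../p prints only the matching lines.
sed -n "/^$str/p" "$file"    # prints: apple pie
```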
using sed command to search for strings at the start of lines.
GNU sed supports case-insensitive matching via the I modifier after the pattern (for example, /^pattern/Ip); if your sed implementation lacks it, you can fall back to Perl for case-insensitive matches.
Method 3: Using AWK Command
The regex for the awk command is the same and does the same thing in awk’s own command style. It is almost identical to the sed command’s: it searches for the string variable at the beginning of the line using the ^ operator and then simply prints the matched lines. This search is also case-sensitive.
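A sketch of the awk variant; passing the string in with -v and building the anchored pattern avoids interpolating shell variables inside the awk program:

```shell
file=notes.txt
str=apple
printf 'apple pie\ncrab apple\n' > "$file"

# Match only lines whose start matches the string:
awk -v str="$str" '$0 ~ "^" str' "$file"    # prints: apple pie
```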
using the awk command to search for strings at the start of lines.
To make the search in awk case insensitive, you can make some changes to the same command.
using the awk command to search for case insensitive strings at the start of lines.
The awk command has some inbuilt functions which set the arguments as default to case sensitive, we can change the properties of the awk commands.
I can select all file by going to the 1st line Alt + \ , start marking the text by Alt + A , go to the last line by Alt + /
But there, I don’t know which key removes the selected text. Hitting Delete doesn’t work for me, and Ctrl + K to cut the text will destroy my clipboard.
So, what is the hotkey to delete selected text?
13 Answers
nano of course can delete blocks, see this article
- use CTRL + Shift + 6 to mark the beginning of your block
- move cursor with arrow keys to end of your block, the text will be highlighted.
- use CTRL + K to cut/delete block.
To paste the block to another place, move cursor to the position and the use CTRL + U . You can paste the block as often as you want to.
nano does not support deleting a block of text, only cutting it (to the server’s clipboard).
Instead, if you are using Putty, do the following:
Select the text you wish to copy to the clipboard with the mouse first — this copies it to your local clipboard (i.e. Windows 7 clipboard), which nano can’t touch:
Then, select your block in nano and use Ctrl-K to delete it.
Finally, move your cursor to the position where you want to insert the text you copied in Step 1 (you can close nano, open another file, etc. too as long as you don’t select another block of text with the mouse). Right-click to paste the copied text at the cursor position.
deletes the current line. It can also be useful for quick editing. Thanks!
In some PuTTY sessions, the following works too
If you are trying to empty all of the lines, a pretty elegant and simple approach is from the bash CLI:
- echo "" > filename.txt
- nano filename.txt
Sadly, nano doesn’t seem to have any way of bulk-deleting without clobbering the clipboard.
The safest thing to do while staying within the document is probably to paste your clipboard before deleting, then re-cut it again afterwards.
If you have a block of text already selected, then Ctrl + U will paste the clipboard text, including it in your selected block. You can then unmark the pasted text and just cut your originally selected block.
These steps don’t preserve your clipboard, exactly, but at least effectively perform a swap between your selection and the clipboard, allowing you to re-cut the lines you had in there before.
Updated April 23, 2022
In this tutorial, we will learn-
What is a Pipe in Linux?
The Pipe is a command in Linux that lets you use two or more commands such that the output of one command serves as input to the next. In short, the output of each process feeds directly into the input of the next one, like a pipeline. The symbol ‘|’ denotes a pipe.
Pipes help you combine two or more commands and run them in sequence. With them, you can build powerful commands which perform complex tasks in a jiffy.
Let us understand this with an example.
When you use ‘cat’ command to view a file which spans multiple pages, the prompt quickly jumps to the last page of the file, and you do not see the content in the middle.
To avoid this, you can pipe the output of the ‘cat’ command to ‘less’ which will show you only one scroll length of content at a time.
An illustration would make it clear.
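As a sketch, with a hypothetical 200-line file; the less command itself is interactive, so head stands in below to show the same pipe idea non-interactively:

```shell
# Generate a long hypothetical file
seq 1 200 > longfile.txt

# Interactive paging (press Space to advance, q to quit):
#   cat longfile.txt | less

# The same pipe idea, shown non-interactively with head:
cat longfile.txt | head -n 3
```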
‘pg’ and ‘more’ commands
Instead of ‘less’, you can also use ‘pg’ or ‘more’.
And, you can view the file in digestible bits and scroll down by simply hitting the enter key.
The ‘grep’ command
Suppose you want to search for a particular piece of information, such as a postal code, in a text file.
You may manually skim the content yourself to trace the information. A better option is to use the grep command. It will scan the document for the desired information and present the result in a format you want.
Syntax:
Let’s see it in action –
Here, grep command has searched the file ‘sample’, for the string ‘Apple’ and ‘Eat’.
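The contents of the ‘sample’ file below are hypothetical; the general shape is grep "string" filename, and repeating -e lets us search for both strings at once:

```shell
# Hypothetical 'sample' file
printf 'Apple is a fruit\nEat healthy food\nBananas are yellow\n' > sample

# Search for lines containing "Apple" or "Eat"
grep -e 'Apple' -e 'Eat' sample
```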
Following options can be used with this command.
| Option | Function |
|---|---|
| -v | Shows all the lines that do not match the searched string |
| -c | Displays only the count of matching lines |
| -n | Shows the matching line and its number |
| -i | Match both (upper and lower) case |
| -l | Shows just the name of the file with the string |
Let us try the ‘-i’ option on the same file used above –
Using the ‘-i’ option, grep has filtered all lines containing the string ‘a’, regardless of case.
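A sketch of the case-insensitive search, again with a hypothetical ‘sample’ file:

```shell
# Hypothetical 'sample' file
printf 'Apple pie\nBANANA\nkiwi\n' > sample

# -i matches 'a' in any case, so both "Apple pie" and "BANANA" match
grep -i 'a' sample
```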
The ‘sort’ command
This command helps in sorting the contents of a file alphabetically.
The syntax for this command is:
Consider the contents of a file.
Using the sort command
There are extensions to this command as well, and they are listed below.
| Option | Function |
|---|---|
| -r | Reverses sorting |
| -n | Sorts numerically |
| -f | Case insensitive sorting |
The example below shows reverse sorting of the contents in file ‘abc’.
What is a Filter?
Linux has a lot of filter commands like awk, grep, sed, spell, and wc. A filter takes input from one command, does some processing, and gives output.
When you pipe two commands, the “filtered” output of the first command is given to the next.
Let’s understand this with the help of an example.
We have the following file ‘sample’
We want to highlight only the lines that do not contain the character ‘a’, but the result should be in reverse order.
For this, the following syntax can be used.
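A sketch of that filter pipeline, assuming a hypothetical ‘sample’ file:

```shell
# Hypothetical 'sample' file
printf 'apple\nberry\nkiwi\nplum\nmango\n' > sample

# Keep only lines without 'a', then sort them in reverse order
grep -v 'a' sample | sort -r
```

Only berry, kiwi and plum survive the grep, and sort -r prints them as plum, kiwi, berry.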
Basic LINUX commands
When you hear of Linux, most people think of a complex operating system that is only used by programmers. But it’s not as weird as it sounds.
When running a Linux OS, you need to use a shell, an interface that gives you access to the resources of the operating system. The shell is a program that receives commands from the user, passes them to the OS to process, and displays the output. The shell is a core component of Linux. Most of its distributions ship with a GUI (Graphical User Interface), but at its heart Linux provides a CLI (Command-Line Interface). To open the terminal, press Ctrl+Alt+T in Ubuntu, or press Alt+F2, type gnome-terminal, and press Enter. On a Raspberry Pi, type lxterminal.
The shell is the user interface responsible for handling all commands typed at the CLI. It reads and interprets commands and instructs the operating system to execute tasks as requested. In other words, the shell is a user interface that controls the CLI and acts as an intermediary linking users to the operating system.
And if you’re thinking of using Linux, knowing the primary commands will go a long way.
Here is a list of basic Linux commands:
pwd – Use the pwd command to find the path of the current working directory (folder) you are in. The command returns an absolute (full) path, which is essentially a path of all the directories that starts with a forward slash (/). /home/username is an example of an absolute path.
cd – Use the cd command to navigate through Linux files and directories. It needs either the full path or the directory name, depending on the current working directory you’re in. Let’s assume you’re in /home/username/Documents and you want to go to Images, a subdirectory of Documents. To do this, simply type the following command: cd Images. Note that this command is case-sensitive, and you have to type the directory name exactly as it is.
ls – Use the ls command to see which files are in the directory you are in. You can see all the hidden files using the command “ls -a”. If you want to see the contents of other directories, type ls followed by the directory path. For example, enter ls /home/username/Documents to display the contents of Documents.
mkdir & rmdir – Use the mkdir command to create a new directory: if you type mkdir Music, a directory called Music will be created. Use mkdir Music/Newfile to create a new directory inside another directory. Use the -p (parents) option to create any missing intermediate directories. For example, mkdir -p Music/2020/Newfile will also create the missing “2020” directory. Use rmdir to delete an empty directory.
rm – Use rm to delete files. Use “rm -r” to delete a directory: it removes both the directory and the files it contains. Plain rm only removes files, not directories.
touch – The touch command is used to create a new empty file. It may be anything from an empty text file to an empty zip file. For example: touch new.txt.
locate – You can use this command to locate a file, much like the search command in Windows. What’s more, using the -i argument along with this command makes it case-insensitive, so you can search for a file even if you don’t remember its exact name. To search for a file that contains two or more words, use an asterisk (*). For example, locate -i school*note will search for any file whose name contains “school” and “note”, regardless of case.
man & --help – Use the man command to learn more about a command and how to use it. It shows the manual pages of the command. For example, “man cd” shows the manual pages of the cd command. Typing a command name followed by --help also shows how the command can be used (e.g., cd --help).
cp – Use the cp command to copy files from the current directory to another directory.
mv – Use the mv command to transfer files around the command line. We may also use the command mv to rename a file. For example, if we want to change the name of the file “text” to “new,” we can use “mv text new.” It takes the two arguments, much like the command cp.
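The commands above can be sketched as one short, self-contained session (the directory and file names are hypothetical):

```shell
# A short walkthrough of the basic commands, in a scratch directory
mkdir -p demo/music/2020      # -p also creates the intermediate directory
cd demo
pwd                            # prints the absolute path of this directory
touch notes.txt                # create an empty file
ls                             # list the directory contents
cp notes.txt copy.txt          # copy a file
mv copy.txt renamed.txt        # rename (move) a file
rm renamed.txt                 # remove a file
rm -r music                    # remove a directory and everything in it
cd ..
```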
Here are a few more complicated commands that should prove very useful:
cat – One of the most commonly used commands in Linux is cat (short for concatenate). It is used to list a file’s contents on the standard output. To run this command, type cat followed by the file’s name and extension. For example: cat file.txt.
diff – The diff command, short for difference, compares the contents of two files line by line. After analyzing the files, it outputs the lines that do not match. Programmers often use this command to make program alterations instead of rewriting the entire source code.
jobs – The jobs command shows all current jobs along with their status. A job is simply a process started by the shell.
find – Similar to the locate command, you can also use find to search for files and directories. The difference is that the find command searches within a given directory. For example, find /home/ -name notes.txt will search for a file called notes.txt in the home directory and its subdirectories.
echo – The echo command lets us move some data, usually text, into a file. For example, if you want to create a new text file or append to an existing one, you just need to type “echo hello, my name is alok >> new.txt”.
grep – Another simple Linux command that is certainly useful for daily use is grep. It helps you to browse through all the text in a given file.
head – The head command is used to display the first rows of any text file. By default, the first 10 lines will be shown, but you can change that number to your taste.
tail – This one has a similar feature to the head command, but instead of displaying the first lines, the tail command shows the last ten lines of the text file.
ping – Use the ping command to check your connection to a server. Ping is a computer network administration utility used to test the reachability of a host on an Internet Protocol (IP) network.
kill – If you have an unresponsive program, you can terminate it manually with the kill command. It sends a signal to the misbehaving application, instructing it to terminate itself. There are a total of sixty-four signals you can use, but people typically only use two: SIGTERM (15) requests a program to stop running and gives it some time to save its progress; if you do not specify a signal when entering the kill command, this signal is used. SIGKILL (9) forces programs to stop immediately; unsaved progress will be lost.
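A hedged sketch of jobs and kill, tried on a throwaway background process (the sleep command and signal number are illustrative):

```shell
sleep 60 &        # start a long-running background job
PID=$!            # remember its process ID
jobs              # list the jobs started by this shell
kill -15 "$PID"   # send SIGTERM (signal 15) to ask it to stop
```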
Basic Linux commands allow users to quickly and easily perform tasks. It might take a while to learn some of the simple commands, but with plenty of practice, nothing is impossible. In the end, it would undoubtedly be helpful for you to learn and master these simple Linux commands.
The SORT command is used to sort a file, arranging the records in a particular order. By default, the sort command sorts a file assuming its contents are ASCII. Using options, the sort command can also sort numerically.
- SORT command sorts the contents of a text file, line by line.
- sort is a standard command-line program that prints the lines of its input or concatenation of all files listed in its argument list in sorted order.
- The sort command is a command-line utility for sorting lines of text files. It supports sorting alphabetically, in reverse order, by number, by month, and can also remove duplicates.
- The sort command can also sort by items not at the beginning of the line, ignore case sensitivity, and return whether a file is sorted or not. Sorting is done based on one or more sort keys extracted from each line of input.
- By default, the entire input is taken as the sort key. Blank space is the default field separator.
The sort command follows these features as stated below:
- Lines starting with a number will appear before lines starting with a letter.
- Lines starting with a letter that appears earlier in the alphabet will appear before lines starting with a letter that appears later in the alphabet.
- Lines starting with an uppercase letter will appear before lines starting with the same letter in lowercase.
Examples
Suppose you create a data file with name file.txt:
Sorting a file: Now use the sort command
Syntax :
Note: This command does not actually change the input file, i.e. file.txt.
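A sketch with a hypothetical file.txt, showing both the sorted output and that the input file is left unchanged:

```shell
# Hypothetical file.txt
printf 'banana\napple\ncherry\n' > file.txt

sort file.txt          # prints the lines in sorted order
head -n 1 file.txt     # file.txt itself is unchanged: still starts with banana
```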
Sort function with a mixed file, i.e. one with both uppercase and lowercase letters: when we have a file with both uppercase and lowercase letters, the uppercase letters are sorted first, followed by the lowercase letters (this is the behavior in the C locale; other locales may interleave cases).
Example:
Create a file mix.txt
Now use the sort command
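A sketch with a hypothetical mix.txt; the C locale is forced explicitly, since the uppercase-first ordering depends on the locale:

```shell
# Hypothetical mix.txt
printf 'banana\nApple\ncherry\nBall\n' > mix.txt

# In the C locale, uppercase letters sort before lowercase ones:
# Apple, Ball, banana, cherry
LC_ALL=C sort mix.txt
```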
Options with sort function:
1. -o Option: If you want to write the output to a new file, output.txt, you can redirect the output to it, or you can use the built-in sort option -o, which allows you to specify an output file.
Using the -o option is functionally the same as redirecting the output to a file.
Note: Neither one has an advantage over the other.
Example: The input file is the same as mentioned above.
Syntax:
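A sketch of the -o option with a hypothetical file.txt:

```shell
printf 'banana\napple\ncherry\n' > file.txt

sort -o output.txt file.txt   # same effect as: sort file.txt > output.txt
cat output.txt
```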
2. -r Option: Sorting in Reverse Order: You can perform a reverse-order sort using the -r flag. The -r flag is an option of the sort command which sorts the input file in reverse, i.e. descending, order.
Example: The input file is the same as mentioned above.
Syntax :
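A sketch of the -r option with a hypothetical file.txt:

```shell
printf 'banana\napple\ncherry\n' > file.txt

sort -r file.txt   # descending order: cherry, banana, apple
```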
3. -n Option: To sort a file numerically, use the -n option. Like the options above, -n is predefined in Unix. This option is used to sort files containing numeric data.
Example :
Let us consider a file with numbers:
Syntax:
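A sketch with a hypothetical numbers.txt, contrasting text sorting with numeric sorting:

```shell
printf '100\n25\n3\n47\n' > numbers.txt

sort numbers.txt      # text sort compares characters: "100" comes before "25"
sort -n numbers.txt   # numeric sort: 3, 25, 47, 100
```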
4. -nr option: To sort a file with numeric data in reverse order we can use the combination of two options as stated below.
Example: The numeric file is the same as above.
Syntax :
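A sketch of the combined -nr options with a hypothetical numbers.txt:

```shell
printf '100\n25\n3\n47\n' > numbers.txt

sort -nr numbers.txt   # numeric, reversed: 100, 47, 25, 3
```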
5. -k Option: Unix provides the feature of sorting a table on the basis of any column number by using -k option.
Use the -k option to sort on a certain column. For example, use “-k 2” to sort on the second column.
Example :
Let us create a table with 2 columns
Syntax :
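A sketch of the -k option with a hypothetical two-column table (name and score); -n is added so the second column is compared numerically:

```shell
# Hypothetical two-column table: name score
printf 'alice 50\nbob 20\ncarol 90\n' > table.txt

sort -k 2 -n table.txt   # sort on the second column, numerically
```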
6. -c option: This option checks whether the given file is already sorted. Pass the -c option to sort: if any lines are out of order, it reports the first such line and exits with a non-zero status. The sort tool can thus be used to find out whether a file is sorted and which lines are out of order.
Example :
Suppose a file exists with a list of cars called cars.txt.
Syntax :
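A sketch of the -c option with a hypothetical cars.txt that is deliberately out of order:

```shell
# Hypothetical cars.txt, deliberately out of order
printf 'audi\nbmw\nzen\nfiat\n' > cars.txt

# -c reports the first out-of-order line and exits with a non-zero status
sort -c cars.txt || echo "cars.txt is not sorted"
```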
7. -u option: To sort and remove duplicates pass the -u option to sort. This will write a sorted list to standard output and remove duplicates.
This option is helpful, as removing the duplicates gives us a file free of redundancy.
Example: Suppose a file exists with a list of cars called cars.txt.
Syntax :
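A sketch of the -u option with a hypothetical cars.txt containing duplicates:

```shell
printf 'bmw\naudi\nbmw\nfiat\naudi\n' > cars.txt

sort -u cars.txt   # sorted output with duplicates removed: audi, bmw, fiat
```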
8. -M Option: To sort by month pass the -M option to sort. This will write a sorted list to standard output ordered by month name.
Example:
Suppose the following file exists and is saved as months.txt
Using The -M option with sort allows us to order this file.
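A sketch of the -M option with a hypothetical months.txt (month-name recognition can depend on the locale):

```shell
# Hypothetical months.txt
printf 'March\nJanuary\nDecember\nJuly\n' > months.txt

sort -M months.txt   # ordered by month name: January, March, July, December
```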
Application and uses of sort command:
- It can sort any type of file: table files, text files, numeric files and so on.
- Sorted output can be written from one file to another without altering the original.
- Sorting table files on the basis of columns is simple and easy.
- Many options are available for sorting in all possible ways.
- The most beneficial aspect is that a data file can be reused many times, since no change is made to the input file.
- The original data is always safe and unaltered.
I want to loop through the lines of a file with a Bash script and one of the ways to do it is using a for loop.
What is a for loop?
A for loop is one of the most common programming constructs and it’s used to execute a given block of code for each item in a list. For instance, let’s say you want to write a program that prints the number of people who live in the 10 biggest European cities. The program can use a for loop to go through each city in the list and print the number of people for that city.
The logic executed is every time the same and the only thing that changes is the city.
Below you can see the generic syntax for a Bash for loop:
LIST can be, for example:
- a range of numbers.
- a sequence of strings separated by spaces.
- the output of a Linux command (e.g. the ls command).
The N commands between do and done are executed for each item in the list.
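The generic syntax described above can be sketched as follows (the items are illustrative):

```shell
# Generic shape: for VARIABLE in LIST; do COMMANDS; done
for CITY in Rome Paris Berlin; do
  echo "$CITY"
done
```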
For Loop in Bash
In this article you will learn how to use the for loop in Bash and specifically to go through the lines of a file.
But why would you do that? Going through the lines of a file?
For instance, you might need to do that if you have exported data from an application into a file and you want to elaborate that data somehow.
In this example we will use a simple .txt file in which every line contains:
- the name of a city
- the number of people who live in that city.
Below you can see the format of the text file, a colon is used to separate each city from the number of people who live in that city:
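The city names and numbers below are hypothetical placeholders for the format described:

```shell
# cities.txt: one city per line, a colon separating name and population
cat > cities.txt <<'EOF'
Istanbul:15462452
Moscow:12195221
London:9304016
EOF
cat cities.txt
```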
So, how can we use a Bash for loop to go through the content of this file?
First we will store the name of the file in a variable
After that, we will use another variable and the cat command to get all the lines in the file:
Here we are using command substitution to assign the output of the cat command to the LINES variables.
Finally, the for loop allows us to go through each line of the file:
The do and done keywords delimit the commands to be executed at each iteration of the for loop.
For example, if you have a file with 10 lines the for loop will go through 10 iterations and at each iteration it will read one line of the file.
The echo command can be replaced by any sequence of commands based on what you want to do with each line in the file.
Here is the final script:
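A sketch of the final script under the assumptions above (a hypothetical cities.txt whose lines contain no spaces, since the unquoted expansion splits on whitespace):

```shell
#!/bin/bash
printf 'Istanbul:15462452\nMoscow:12195221\nLondon:9304016\n' > cities.txt

FILENAME="cities.txt"
LINES=$(cat "$FILENAME")   # command substitution: file content into a variable

for LINE in $LINES; do     # unquoted on purpose, so it splits into lines
  echo "$LINE"
done
```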
And the output of the script is…
We are passing the list to the for loop using the cat command.
This means we can use any commands we want to generate the LIST to be passed to the for loop.
Do you have in mind any other possible commands?
Also, the for loop is not the only option to create a loop in a Bash script, another option is a while loop.
What is a Counter in a Bash For Loop?
In a for loop you can also define a variable called counter. You can use a counter to track each iteration of the loop.
The use of a counter is very common in all programming languages. It can also be used to access the elements of a data structure inside the loop (this is not the case for our example).
Let’s modify the previous program and define a counter whose value is printed at every iteration:
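A sketch of the modified program, with the same hypothetical cities.txt:

```shell
printf 'Istanbul:15462452\nMoscow:12195221\nLondon:9304016\n' > cities.txt

FILENAME="cities.txt"
LINES=$(cat "$FILENAME")
COUNTER=0

for LINE in $LINES; do
  echo "Counter $COUNTER: $LINE"
  COUNTER=$((COUNTER+1))   # Bash arithmetic: increase the counter by 1
done
```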
As you can see I have defined a variable called COUNTER outside of the for loop with its initial value set to 0.
Then at each iteration I print the value of the counter together with the line from the file.
After doing that I use the Bash arithmetic operator to increase the value of the variable COUNTER by 1.
And here is the output of the script:
Break and Continue in a Bash For Loop
There are ways to alter the normal flow of a for loop in Bash.
The two statements that allow you to do that are break and continue:
- break: interrupts the execution of the for loop and jumps to the first line after the for loop.
- continue: jumps to the next iteration of the for loop.
Having defined a counter helps us see what happens when we add break or continue to our existing script.
Let’s start with break…
I will add an if statement based on the value of the counter. The break statement inside the if breaks the execution of the loop if the counter is equal to 3:
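A sketch with a five-line hypothetical file, so the counter can actually reach 3:

```shell
printf 'a1\nb2\nc3\nd4\ne5\n' > lines.txt
COUNTER=0

for LINE in $(cat lines.txt); do
  if [ "$COUNTER" -eq 3 ]; then
    break                  # leave the loop entirely
  fi
  echo "Counter $COUNTER: $LINE"
  COUNTER=$((COUNTER+1))
done
```

Only the lines for counter values 0, 1 and 2 are printed before the break fires.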
And the output is:
As you can see the break statement stops the execution of the for loop before reaching the echo command because COUNTER is 3.
After that, let’s replace break with continue and see what happens. I will leave the rest of the code unchanged.
And here is the output for the script:
Weird…the output is the same. Why?
That’s because when the value of COUNTER is 3 the continue statement jumps to the next iteration of the loop but it doesn’t increment the value of the counter.
So at the next iteration the value of the COUNTER is still 3 and the continue statement is executed again, and so on for all the other iterations.
To fix this we have to increase the value of the COUNTER variable inside the if statement:
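A sketch of the corrected version, again with a five-line hypothetical file:

```shell
printf 'a1\nb2\nc3\nd4\ne5\n' > lines.txt
COUNTER=0

for LINE in $(cat lines.txt); do
  if [ "$COUNTER" -eq 3 ]; then
    COUNTER=$((COUNTER+1))  # increment here too, so 3 is skipped only once
    continue                # skip only this iteration
  fi
  echo "Counter $COUNTER: $LINE"
  COUNTER=$((COUNTER+1))
done
```

Now only the "Counter 3" line is skipped, and "Counter 4" is printed for the last item.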
This time we see the correct output:
As you can see “Counter 3: ….” is not printed in the terminal.
Writing a For Loop in One Line
Before finishing this tutorial, let’s see how we can write a for loop in one line.
This is not a suggested practice considering that it makes your code less readable.
But it’s good to know how to write a loop in one line, it gives more depth to your Bash knowledge.
The generic syntax for a Bash for loop in one line is the following:
Let’s print the content of our text file with a one line for loop:
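A sketch of the one-line form, with the same hypothetical cities.txt:

```shell
printf 'Istanbul:15462452\nMoscow:12195221\nLondon:9304016\n' > cities.txt

# One-line shape: for VARIABLE in LIST; do COMMANDS; done
for LINE in $(cat cities.txt); do echo "$LINE"; done
```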
To simplify things I have removed the COUNTER and the if statement. If they were there the one line for loop would be a lot harder to read.
Try to stay away from one-liners if they make your code hard to read.
Conclusion
In conclusion, in this tutorial you have learned how to:
- Store the lines of a file in a variable
- Use a for loop to go through each line.
- Use a counter in a for loop.
- Change the flow of a loop with break and continue.
- Write a for loop in one line.
How are you going to use this?
If you want to learn more about loops in Bash scripting have a look at this tutorial.
Related FREE Course: Decipher Bash Scripting
Global
- :h[elp] keyword – open help for keyword
- :sav[eas] file – save file as
- :clo[se] – close current pane
- :ter[minal] – open a terminal window
- K – open man page for word under the cursor
Cursor movement
- h – move cursor left
- j – move cursor down
- k – move cursor up
- l – move cursor right
- gj – move cursor down (multi-line text)
- gk – move cursor up (multi-line text)
- H – move to top of screen
- M – move to middle of screen
- L – move to bottom of screen
- w – jump forwards to the start of a word
- W – jump forwards to the start of a word (words can contain punctuation)
- e – jump forwards to the end of a word
- E – jump forwards to the end of a word (words can contain punctuation)
- b – jump backwards to the start of a word
- B – jump backwards to the start of a word (words can contain punctuation)
- ge – jump backwards to the end of a word
- gE – jump backwards to the end of a word (words can contain punctuation)
- % – move to matching character (default supported pairs: ‘()’, ‘{}’, ‘[]’ – use :h matchpairs in vim for more info)
- 0 – jump to the start of the line
- ^ – jump to the first non-blank character of the line
- $ – jump to the end of the line
- g_ – jump to the last non-blank character of the line
- gg – go to the first line of the document
- G – go to the last line of the document
- 5gg or 5G – go to line 5
- gd – move to local declaration
- gD – move to global declaration
- fx – jump to next occurrence of character x
- tx – jump to before next occurrence of character x
- Fx – jump to the previous occurrence of character x
- Tx – jump to after previous occurrence of character x
- ; – repeat previous f, t, F or T movement
- , – repeat previous f, t, F or T movement, backwards
- } – jump to next paragraph (or function/block, when editing code)
- { – jump to previous paragraph (or function/block, when editing code)
- zz – center cursor on screen
- Ctrl + e – move screen down one line (without moving cursor)
- Ctrl + y – move screen up one line (without moving cursor)
- Ctrl + b – move back one full screen
- Ctrl + f – move forward one full screen
- Ctrl + d – move forward 1/2 a screen
- Ctrl + u – move back 1/2 a screen
Insert mode – inserting/appending text
- i – insert before the cursor
- I – insert at the beginning of the line
- a – insert (append) after the cursor
- A – insert (append) at the end of the line
- o – append (open) a new line below the current line
- O – append (open) a new line above the current line
- ea – insert (append) at the end of the word
- Ctrl + h – delete the character before the cursor during insert mode
- Ctrl + w – delete word before the cursor during insert mode
- Ctrl + j – begin new line during insert mode
- Ctrl + t – indent (move right) line one shiftwidth during insert mode
- Ctrl + d – de-indent (move left) line one shiftwidth during insert mode
- Ctrl + n – insert (auto-complete) next match before the cursor during insert mode
- Ctrl + p – insert (auto-complete) previous match before the cursor during insert mode
- Ctrl + rx – insert the contents of register x
- Ctrl + ox – Temporarily enter normal mode to issue one normal-mode command x.
- Esc – exit insert mode
Editing
- r – replace a single character.
- R – replace more than one character, until ESC is pressed.
- J – join line below to the current one with one space in between
- gJ – join line below to the current one without space in between
- gwip – reflow paragraph
- g~ – switch case up to motion
Marking text (visual mode)
- v – start visual mode, mark lines, then do a command (like y-yank)
- V – start linewise visual mode
- o – move to other end of marked area
- Ctrl + v – start visual block mode
- O – move to other corner of block
- aw – mark a word
- ab – a block with ()
- aB – a block with {}
- at – a block with tags
- ib – inner block with ()
- iB – inner block with {}
- it – inner block with tags
- Esc – exit visual mode
Visual commands
- > – shift text right
- < – shift text left
- y – yank (copy) marked text
- d – delete marked text
- ~ – switch case
Registers
- :reg[isters] – show registers content
- “xy – yank into register x
- “xp – paste contents of register x
- “+y – yank into the system clipboard register
- “+p – paste from the system clipboard register
Registers are stored in ~/.viminfo, and will be loaded again on next restart of vim.
0 – last yank
” – unnamed register, last delete or yank
% – current file name
# – alternate file name
* – clipboard contents (X11 primary)
+ – clipboard contents (X11 clipboard)
/ – last search pattern
: – last command-line
. – last inserted text
– – last small (less than a line) delete
= – expression register
_ – black hole register
If you have the Cornell-Ithaca NetID but are currently not on campus, launch the VPN connection on your local machine (laptop) using the CIT-provided Cisco AnyConnect Secure Mobility Client . This will make your laptop effectively a part of Ithaca campus network.
If you have a Windows laptop
If not yet done, download the PuTTy ssh client:
sgtatham/putty/latest/w32/putty.exe . Save the exe file anywhere on your laptop (e.g., on the Desktop for access).
Double-click on the PuTTy icon. In the ‘Host Name’ field, enter the full name of your assigned machine (e.g., cbsum1c1b002.biohpc.cornell.edu). Make sure that ‘Port’ is set to ‘22’ and ‘Connection type’ to ‘ssh’. Click ‘Open’. A terminal window will open with the login prompt. At the prompt, type your BioHPC user ID and hit ENTER. Then enter your BioHPC password and hit ENTER (NOTE: as you type the password, nothing will appear on the screen – this is on purpose).
Since you will be accessing your assigned machine often during the workshop, it makes sense to create and save a customized profile for it in PuTTy. To do this, open the PuTTy client, enter the full name of the workstation in the ‘Host Name’ field, and make sure the ‘Connection type’ is ‘ssh’ and ‘Port’ is ‘22’. Then under ‘Saved Sessions’, enter a short nickname for the machine (e.g., the first part of the name, like cbsum1c1b002). Expand the ‘SSH’ tab in the left panel, click ‘X11’, and check the box ‘Enable X11 forwarding’. If you prefer black text on a white background, you can change the color settings: click ‘Colours’ in the left panel, set ‘Default Foreground’ and ‘Default Bold Foreground’ to ‘0 0 0’, and ‘Default Background’ and ‘Default Bold Background’ to ‘255 255 255’. Once the customization is complete, click ‘Session’ in the left panel, and then click ‘Save’. This will save the machine’s profile under the nickname you specified, and it will appear in the list of saved profiles. To connect to a machine with a saved profile, just double-click on the nickname displayed in the ‘Saved Sessions’ section.
If you have a Mac (or Linux) laptop
Launch the terminal window. Type (replacing cbsum1c1b002 with the name of your assigned machine and your_id with your own BioHPC user ID)
Here you will find out:
- when a bash for looping through lines in file is used
- how to use a bash to loop through lines
- when DiskInternals can help you
Are you ready? Let’s read!
When you can use a bash for looping through lines in file
With a loop, you can repeat entire sets of commands as many times as you wish. You do not need a complex programming language for this: a simple bash script is enough, and you can use a bash for or while loop to go through the lines of a file.
The syntax of reading file line by line
Learn the syntax for bash:
Note: -r – prevents backslash escapes from being interpreted.
You can also enter the IFS = option before the read command. This solution will not trim the leading and trailing spaces.
Here’s what it looks like:
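A sketch with a hypothetical input.txt; note that the first line has leading and trailing spaces that IFS= preserves:

```shell
printf '  alpha  \nbeta\n' > input.txt

# IFS= keeps leading/trailing whitespace; -r disables backslash interpretation
while IFS= read -r line; do
  echo "[$line]"
done < input.txt
```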
Examples of bash for looping lines
Pay attention to this simple example:
The file will be read line by line. The code will assign each line to a variable, and then simply output it.
With each line, you can also pass more than one variable to the read command: the first field is assigned to the first variable, the second to the second variable, etc. If suddenly there are more fields than variables, all the “extra” ones will refer to the last variable.
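A sketch of the multi-variable form with a hypothetical people.txt; any extra fields all land in the last variable:

```shell
printf 'alice 30 london\nbob 25 paris extra words\n' > people.txt

# First field -> name, second -> age, everything else -> rest
while read -r name age rest; do
  echo "$name|$age|$rest"
done < people.txt
```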
Open Linux files in Windows
This article is about DiskInternals Linux Reader. This application will be your best friend if you use a virtual machine or a dual-boot installation, and you need to get files from Linux to Windows. This application can be said to be unique, moreover, it can be used for free. You will not see ridiculous ads, you will have no restrictions; you immediately download the full version of the application and start transferring data from one OS to another. Everything is transparent and clear. The program reads files from all types of hard drives, including SSD, HDD, flash drives, memory cards and others. Also, with this product, you can seamlessly create disk images and enjoy data security.
Instructions for reading Linux files on Windows
These recommendations are suitable for using DiskInternals Linux Reader; carefully study them before starting the file migration.
Naturally, initially you will need to download and install the application.
Then select the storage from which the data will be transferred.
Once the data is displayed on the monitor screen, view it using the View function.
After that, safely save data from Linux to Windows.
As you can see, you can’t imagine a simpler data transfer process!
In C++, you can exit a program in these ways:
- Call the exit function.
- Call the abort function.
- Execute a return statement from main .
exit function
The exit function, declared in &lt;stdlib.h&gt;, terminates a C++ program. The value supplied as an argument to exit is returned to the operating system as the program’s return code or exit code. By convention, a return code of zero means that the program completed successfully. You can use the constants EXIT_FAILURE and EXIT_SUCCESS, also defined in &lt;stdlib.h&gt;, to indicate success or failure of your program.
Issuing a return statement from the main function is equivalent to calling the exit function with the return value as its argument.
abort function
The abort function, also declared in the standard include file &lt;stdlib.h&gt;, terminates a C++ program. The difference between exit and abort is that exit allows the C++ run-time termination processing to take place (global object destructors get called), while abort terminates the program immediately. The abort function bypasses the normal destruction process for initialized global static objects. It also bypasses any special processing that was specified using the atexit function.
atexit function
Use the atexit function to specify actions that execute before the program terminates. No global static objects initialized before the call to atexit are destroyed before execution of the exit-processing function.
return statement in main
Issuing a return statement from main is functionally equivalent to calling the exit function. Consider the following example:
The exit and return statements in the preceding example are functionally identical. Normally, C++ requires that functions that have return types other than void return a value. The main function is an exception; it can end without a return statement. In that case, it returns an implementation-specific value to the invoking process. The return statement allows you to specify a return value from main .
Destruction of static objects
When you call exit or execute a return statement from main , static objects are destroyed in the reverse order of their initialization (after the call to atexit if one exists). The following example shows how such initialization and cleanup works.
Example
In the following example, the static objects sd1 and sd2 are created and initialized before entry to main . After this program terminates using the return statement, first sd2 is destroyed and then sd1 . The destructor for the ShowData class closes the files associated with these static objects.
Another way to write this code is to declare the ShowData objects with block scope, allowing them to be destroyed when they go out of scope: