As already seen, a computer cannot do anything on its own. It has to be given a detailed set of instructions to perform a specific task and achieve a desired goal. This set of instructions, written in a form the computer understands, is known as a computer program.
Software means a set of programs, procedures and associated documentation that describes what the programs do and how they are to be used. Hardware and software have to work together: a number of different software packages can be run on the same hardware to perform different types of jobs. Software acts as an interface between the user and the computer. Software is mainly classified into:
Application Software
System Software
Application Software: Application software is a program or a set of programs written to carry out a specific application, e.g. payroll, financial accounting, etc. Nowadays special application software or packages are available for specialized areas like drawing, engineering, manufacturing, banking, and publishing. The set of programs which together make up an application package are called application programs.
System Software: System software controls the working of the computer system. It helps the user to use the computer by allowing him to communicate with the system. System software controls the working of other software and of hardware devices like printers, memory and the CPU, and thus makes the operation of the computer more efficient. The programs included in system software are called system programs. Without the system programs, it would not be possible for the application programs to work on the computer.
Utility Software: These are sets of programs or tools which are used in program development or for performing limited tasks, e.g. ScanDisk.
Firmware
With the advances in technology it is now possible to make software available on ROM (Read Only Memory) chips. These chips, which form a part of the hardware, have the programs stored in them. Programs made available on hardware in this way are called firmware. Today not only system software but even some dedicated application programs are being made available as firmware.
A number of functions required of a computer system are performed by writing special programs called microprograms. Microprograms execute the low-level machine functions and are mainly used as a substitute for hardware. Such programs can be stored on ROMs and used again and again.
A programming language is the means by which the user communicates with the computer system; it tells the computer what to do. Types of programming languages are:
Machine Language (1st Generation Language)
Assembly Language (2nd Generation Language)
High Level Language (3rd Generation Language)
Machine Language (1st Generation Language)
This is the only language which is understood by the computer, and it is the language nearest to the machine. Programs are written in binary code, i.e. the instructions are made up only of combinations of the binary digits 0 and 1. Machine language may vary from machine to machine, depending on the computer architecture. Machine language programs execute the fastest since they are immediately understood by the computer; no translation of the programs is required. However, machine language is very difficult, tedious and time consuming to program in, since every instruction has to be represented as a series of 0s and 1s. There is therefore always a possibility of errors.
Assembly Language (2nd Generation Language)
The 0s and 1s of the machine language were substituted by letters and symbols in assembly languages. The assembly languages use mnemonics (memory aid) in place of operation codes. The language uses symbols instead of numbers to write programs. For example, in assembly language, the mnemonic add typically means to add numbers, mul typically means to multiply numbers, and mov typically means to move a value to a location in memory. Since the machine language and assembly language both are dependent on the hardware, they are referred to as low level programming languages.
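As a rough illustration, the C program below shows one high level statement together with the kind of mnemonic instructions an assembler might accept for it. The mnemonics in the comment are generic and illustrative only, not the exact syntax of any particular processor:

#include <stdio.h>

int main(void) {
    int a = 2, b = 3, c;
    c = a + b;                  /* one high level statement */
    /* A typical assembler might accept mnemonics such as:
           MOV  R1, a    ; load the value of a into a register
           ADD  R1, b    ; add the value of b to it
           MOV  c, R1    ; store the result back in c
       (generic illustrative mnemonics, not a real instruction set) */
    printf("c = %d\n", c);
    return 0;
}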
High Level Languages (3rd Generation Language)
A high-level language allows you to create powerful and complex programs without knowing how the CPU works and without writing large numbers of low-level instructions. High level languages use English-like words and statements and mathematical symbols for instructions. They make programming easier, since they are relatively easy to learn, and less time is required to write programs. The programmer is not required to know the detailed working of the computer system in order to program in a high level language, and such languages are machine independent.
However, a high level language is not directly understood by the computer; it has to be translated into machine language. High level programs therefore generally execute more slowly and require more memory than the same program written in assembly language.
Some of the popular high level languages are FORTRAN, COBOL, BASIC, Pascal, C, C++, and Java.
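For instance, the short C program below (a minimal sketch; C is one of the languages listed above) computes simple interest using English-like words and mathematical symbols, with no reference to the internal working of the machine:

#include <stdio.h>

int main(void) {
    float principal, rate, time, interest;

    /* Read the inputs */
    printf("Enter principal, rate and time: ");
    scanf("%f %f %f", &principal, &rate, &time);

    /* The formula reads almost like its textbook form */
    interest = (principal * rate * time) / 100;

    printf("Simple interest = %.2f\n", interest);
    return 0;
}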
Translator
The programs which are used to translate programs written in high level language into machine language are known as Language Translators. The types of translators are:
Assembler
Compiler
Interpreter
Assembler: The assembler is a system program, usually supplied by the manufacturer, which converts an assembly language program into a machine readable program; the resulting program is called the object program. The assembler translates each assembly language instruction into the corresponding machine code.
Compiler: The compiler translates the entire source program into a machine language program at once. The source code remains intact. Once a program is compiled it can be run as many times as required without being recompiled. A compiler checks for errors like illegal symbols and statements during compilation and gives out a list of error messages at the end of compilation. However, the compiler is incapable of detecting logical errors in the program.
Interpreter: The interpreter is the program which translates a high level language program into machine language as follows:
It takes one statement from the high level language program
Translates it into a machine instruction, which is immediately executed. Since the program is translated statement by statement, the machine language version of the source program is not stored anywhere in memory.
Difference between Compiler and Interpreter

1. Interpreter: Translates the high level language program into machine language at the time of execution, instruction by instruction. It reads the first instruction of the program and converts it into equivalent machine language instructions, which the CPU then executes; after that it reads and translates the next instruction, and so on.
   Compiler: Translates the entire high level language program into a machine language program at once, before executing it. Compiled programs therefore normally run faster than interpreted programs. The machine language program generated by the compiler after translation is called the object program.
2. Interpreter: No object code is saved for future use.
   Compiler: Object code is saved permanently for future use.
3. Interpreter: Translation is time consuming, since it is repeated every time the program runs.
   Compiler: Translation is done only once, so no translation time is spent at execution.
4. Interpreter: Interpreters are easy to write and do not require large memory space.
   Compiler: Compilers require large memory space.
5. Interpreter: A change in the source program does not require retranslation of the entire code.
   Compiler: Any change in the source program after compilation requires recompilation of the entire code.
6. Interpreter: The interpreter itself is responsible for the execution of the program.
   Compiler: The compiler is not involved during the execution of the program.
Difference between High Level and Low Level Languages

1. High Level Language: It is a programmer friendly language.
   Low Level Language: It is a machine friendly language.
2. High Level Language: Less memory efficient.
   Low Level Language: More memory efficient.
3. High Level Language: Easy to understand.
   Low Level Language: Tough to understand.
4. High Level Language: Simple to debug.
   Low Level Language: Comparatively complex to debug.
5. High Level Language: Portable.
   Low Level Language: Non-portable.
6. High Level Language: Can run on any platform.
   Low Level Language: Machine dependent.
7. High Level Language: Needs a compiler or interpreter for translation.
   Low Level Language: Machine language needs no translator; assembly language needs only an assembler.
Fourth Generation Languages
A number of software vendors produce a variety of application development tools that may offer further improvements in productivity, power and robustness; these tools are often collectively referred to as fourth generation languages. They are also called non-procedural languages, because the user specifies what result is to be achieved rather than the algorithmic procedure for achieving it. 4GL products mainly consist of:
Presentation languages, e.g. report generators and query languages.
Special languages, e.g. the languages used in databases and spreadsheets.
Fifth Generation Languages
These are generally called natural languages, because of their resemblance to natural spoken English. These languages are still in the development stage.
Types of Errors (Bugs)
A bug is an error in a program, and debugging is the technique of finding and removing errors. There are three types of errors, illustrated in the C sketch after this list:
1. Syntax error
2. Logical error
3. Execution error
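A minimal sketch in C of the three error types (the variable names and values are illustrative):

#include <stdio.h>

int main(void) {
    /* 1. Syntax error: a statement that breaks the grammar of the
          language, e.g. a missing semicolon, is caught at compile time:
              printf("hello")                                           */

    /* 2. Logical error: the program compiles and runs, but the result
          is wrong because the logic is wrong.                          */
    int a = 5, b = 3;
    int avg = a + b / 2;            /* intended (a + b) / 2   */
    printf("average = %d\n", avg);  /* prints 6 instead of 4  */

    /* 3. Execution (run-time) error: an operation fails while the
          program is running, e.g. division by zero.                    */
    int d = 0;
    if (d != 0)                     /* without this guard, a / d aborts */
        printf("%d\n", a / d);

    return 0;
}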
Program Designing
The software developer makes use of tools like algorithms and flowcharts to develop the design of the program.
Algorithm
An algorithm is a step-by-step problem-solving procedure; it represents the logic of the processing to be performed. It is a sequence of instructions designed in such a way that, if they are executed in the specified sequence, the desired goal is achieved. In an algorithm,
Each and every instruction has to be precise and clear.
The instruction has to be executed in a finite time.
When the algorithm terminates the desired result should be achieved.
Example: Calculate and print the sum of two numbers.
step 1: Start
step 2: Read A, B
step 3: Set SUM:= A + B
step 4: Print SUM
step 5: Stop
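The same algorithm written in C (a direct, minimal translation of the steps above):

#include <stdio.h>

int main(void) {
    int a, b, sum;
    scanf("%d %d", &a, &b);      /* Step 2: Read A, B    */
    sum = a + b;                 /* Step 3: SUM := A + B */
    printf("SUM = %d\n", sum);   /* Step 4: Print SUM    */
    return 0;                    /* Step 5: Stop         */
}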
Flowchart
A flowchart is a pictorial representation of the algorithm. It represents the steps involved in the procedure and shows the logical sequence of processing using boxes of different shapes. The instruction to be executed is mentioned in the boxes. These boxes are connected together by solid lines with arrows, which indicate the flow of operation.
The standard flowchart symbols are:
Start/Stop: oval (terminal symbol)
Input/Output: parallelogram
Assignment/Calculation/Process: rectangle
Decision Box/Condition: diamond
Connector: circle
Flow Lines: arrows indicating the direction of flow
Pseudocode
Sometimes it is desirable to translate an algorithm into an intermediate form, between that of a flowchart and the source code. Pseudocode is an English approximation of source code that follows the rules, style, and format of a language but ignores most punctuation. Pseudocode (pseudo meaning imitation) is a representation that almost looks like code written in a programming language; however, a good pseudocode should be language independent.
Example: Print whether the number is ‘odd’ or ‘even’.
Algorithm:
Step 1: Input num
Step 2: If num % 2 = 0, then:
Write “Number is Even”
Else:
Write “Number is Odd”
Step 3: Exit
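A minimal C translation of this algorithm:

#include <stdio.h>

int main(void) {
    int num;
    scanf("%d", &num);               /* Step 1: Input num      */
    if (num % 2 == 0)                /* Step 2: If num % 2 = 0 */
        printf("Number is Even\n");
    else
        printf("Number is Odd\n");
    return 0;                        /* Step 3: Exit           */
}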
Example: Print the sum of each digit of a number.
Algorithm:
Step 1: Input num
Step 2: Set sum := 0
Step 3: Repeat Step 4 to 6 while num > 0
Step 4: rem := num % 10
Step 5: sum := sum + rem
Step 6: num := num / 10
Step 7: Write sum
Step 8: Exit
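The same algorithm in C (integer division by 10 discards the last digit at each step):

#include <stdio.h>

int main(void) {
    int num, sum = 0, rem;
    scanf("%d", &num);           /* Step 1: Input num            */
    while (num > 0) {            /* Step 3: repeat while num > 0 */
        rem = num % 10;          /* Step 4: extract last digit   */
        sum = sum + rem;         /* Step 5: add it to the sum    */
        num = num / 10;          /* Step 6: drop the last digit  */
    }
    printf("sum = %d\n", sum);   /* Step 7: Write sum            */
    return 0;
}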
Example: Print whether the number is 'odd' or 'even'.
Flowchart (described in text):
START → Input num → decision "Is num % 2 = 0?"
  True: Print "Number is Even" → STOP
  False: Print "Number is Odd" → STOP
Example: Print the sum of each digit of a number.
Flowchart (described in text):
START → Input num → sum = 0 → decision "Is num > 0?"
  True: rem = num % 10 → sum = sum + rem → num = num / 10 → back to the decision
  False: Print sum → STOP
Example: Print the sum of first n natural numbers.
Algorithm:
Step 1: Input N
Step 2: Set I := 1 and SUM := 0
Step 3: Repeat step 4 and 5 while I <= N
Step 4: SUM := SUM + I
Step 5: I := I + 1
Step 6: Print SUM
Step 7: Exit
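In C, the same loop looks like this (a minimal sketch):

#include <stdio.h>

int main(void) {
    int n, i, sum = 0;
    scanf("%d", &n);             /* Step 1: Input N                 */
    for (i = 1; i <= n; i++)     /* Steps 2 to 5: loop I from 1 to N */
        sum = sum + i;
    printf("SUM = %d\n", sum);   /* Step 6: Print SUM               */
    return 0;
}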
Example: Find the factorial of a number.
Algorithm:
Step 1: Input N
Step 2: Set I := 2 and FACT := 1
Step 3: Repeat step 4 and 5 while I <= N
Step 4: FACT := FACT * I
Step 5: I := I + 1
Step 6: Print FACT
Step 7: Exit
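And the factorial algorithm in C (a minimal sketch; long long is used so that slightly larger values of N fit before overflow):

#include <stdio.h>

int main(void) {
    int n, i;
    long long fact = 1;              /* long long delays overflow       */
    scanf("%d", &n);                 /* Step 1: Input N                 */
    for (i = 2; i <= n; i++)         /* Steps 2 to 5: loop I from 2 to N */
        fact = fact * i;
    printf("FACT = %lld\n", fact);   /* Step 6: Print FACT              */
    return 0;
}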
Example: Print the sum of first n natural numbers.
Flowchart (described in text):
START → Input N → I = 1, SUM = 0 → decision "Is I <= N?"
  True: SUM = SUM + I → I = I + 1 → back to the decision
  False: Print SUM → STOP
Example: Find the factorial of a number.
Flowchart (described in text):
START → Input N → I = 2, FACT = 1 → decision "Is I <= N?"
  True: FACT = FACT * I → I = I + 1 → back to the decision
  False: Print FACT → STOP
Computer codes are used for internal representation of data in computers.
As computers use binary numbers for internal data representation, computer codes use binary coding schemes.
In binary coding, every symbol that appears in the data is represented by a group of bits.
The group of 8 bits used to represent a symbol is called a byte.
Commonly used computer codes are BCD, EBCDIC, and ASCII.
BCD
BCD stands for Binary Coded Decimal.
It is one of the early computer codes.
It uses 6 bits to represent a symbol.
It can represent 64 (2⁶) different characters.
BCD code chart (zone bits and digit bits):

Char  Zone  Digit        Char  Zone  Digit
A     11    0001         J     10    0001
B     11    0010         K     10    0010
C     11    0011         L     10    0011
D     11    0100         M     10    0100
E     11    0101         N     10    0101
F     11    0110         O     10    0110
G     11    0111         P     10    0111
H     11    1000         Q     10    1000
I     11    1001         R     10    1001
S     01    0010         1     00    0001
T     01    0011         2     00    0010
U     01    0100         3     00    0011
V     01    0101         4     00    0100
W     01    0110         5     00    0101
X     01    0111         6     00    0110
Y     01    1000         7     00    0111
Z     01    1001         8     00    1000
                         9     00    1001
                         0     00    1010
EBCDIC
EBCDIC stands for Extended Binary Coded Decimal Interchange Code. It is mainly used on IBM mainframes.
It uses 8 bits to represent a symbol.
It can represent 256 (2⁸) different characters.
EBCDIC code chart (zone bits and digit bits):

Char  Zone  Digit        Char  Zone  Digit
A     1100  0001         J     1101  0001
B     1100  0010         K     1101  0010
C     1100  0011         L     1101  0011
D     1100  0100         M     1101  0100
E     1100  0101         N     1101  0101
F     1100  0110         O     1101  0110
G     1100  0111         P     1101  0111
H     1100  1000         Q     1101  1000
I     1100  1001         R     1101  1001
S     1110  0010         0     1111  0000
T     1110  0011         1     1111  0001
U     1110  0100         2     1111  0010
V     1110  0101         3     1111  0011
W     1110  0110         4     1111  0100
X     1110  0111         5     1111  0101
Y     1110  1000         6     1111  0110
Z     1110  1001         7     1111  0111
                         8     1111  1000
                         9     1111  1001
ASCII
ASCII stands for American Standard Code for Information Interchange.
ASCII is of two types – ASCII-7 and ASCII-8.
ASCII-7 uses 7 bits to represent a symbol and can represent 128 (2⁷) different characters.
ASCII-8 uses 8 bits to represent a symbol and can represent 256 (2⁸) different characters.
First 128 characters in ASCII-7 and ASCII-8 are same.
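A quick way to see ASCII codes, sketched in C (a char is stored as its numeric code, so printing it with %d shows the ASCII value):

#include <stdio.h>

int main(void) {
    char ch = 'A';
    /* 'A' is stored as the ASCII code 65 (1000001 in binary) */
    printf("'%c' has ASCII code %d\n", ch, ch);
    printf("'%c' has ASCII code %d\n", 'a', 'a');   /* 97 */
    printf("'%c' has ASCII code %d\n", '0', '0');   /* 48 */
    return 0;
}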
Unicode
Provides a consistent way of encoding multilingual plain text.
Defines codes for characters used in all major languages of the world.
Defines codes for special characters, mathematical symbols, technical symbols, and diacritics.
Capacity to encode as many as a million characters.
Assigns each character a unique numeric value and name.
Reserves a part of the code space for private use.
Affords the simplicity and consistency of ASCII; characters in the ASCII range keep the same code values.
Specifies an algorithm for the presentation of text with bi-directional behavior.