Welcome to graduate2professional.blogspot.com

Thursday, December 24, 2009

What are Condition and Decision, and how do they influence Coverage?

Condition - A Boolean expression containing no Boolean operators.

Decision - A Boolean expression composed of conditions and zero or more Boolean operators. A decision without a Boolean operator is a condition. If a condition appears more than once in a decision, each occurrence is a distinct condition.

Decision Coverage - Every point of entry and exit in the program has been invoked at least once and every decision in the program has taken on all possible outcomes at least once.

Modified Condition/Decision Coverage - Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, every decision in the program has taken all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision's outcome.


1. Structural coverage guidelines are:
a) Every statement in the program has been invoked at least once;
b) Every point of entry and exit in the program has been invoked at least once;
c) Every control statement (i.e., branchpoint) in the program has taken all possible outcomes (i.e., branches) at least once;
d) Every non-constant Boolean expression in the program has evaluated to both a True and a False result;
e) Every non-constant condition in a Boolean expression in the program has evaluated to both a True and a False result;
f) Every non-constant condition in a Boolean expression in the program has been shown to independently affect that expression's outcome.

2. Based upon these definitions:
• Statement Coverage requires (a) only
• DC requires (b, c, d)
• MC/DC requires (b, c, d, e, f)
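
As a small illustration, here is a hypothetical C function (not taken from any standard) containing a single decision built from two conditions; the comments sketch test vectors that would satisfy Decision Coverage versus MC/DC:

#include <stdio.h>

/* Hypothetical decision with two conditions: (a > 0) and (b > 0). */
static int both_positive(int a, int b)
{
    return (a > 0) && (b > 0);   /* one decision, two conditions */
}

int main(void)
{
    /* Decision Coverage: the decision must evaluate to both TRUE and FALSE,
       e.g. (1,1) -> TRUE and (0,1) -> FALSE would be enough.               */
    /* MC/DC additionally requires each condition to independently affect the
       outcome; for "A && B" a minimal set is:
         (1,1) -> TRUE,  (0,1) -> FALSE   (shows A's independent effect)
         (1,1) -> TRUE,  (1,0) -> FALSE   (shows B's independent effect)    */
    printf("%d %d %d\n",
           both_positive(1, 1),    /* 1 */
           both_positive(0, 1),    /* 0 */
           both_positive(1, 0));   /* 0 */
    return 0;
}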

Tuesday, December 22, 2009

What is a Thread?

•Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system.

•To the software developer, the concept of a “procedure” that runs independently from its main program may best describe a thread.

•To go one step further, imagine a main program (a.out) that contains a number of procedures. Then imagine all of these procedures being able to be scheduled to run simultaneously and/or independently by the operating system. That would describe a "multi-threaded" program.

•Before understanding a thread, one first needs to understand a UNIX process. A process is created by the operating system, and requires a fair amount of "overhead".

Processes contain information about program resources and program execution state, including:
1.Process ID, process group ID, user ID, and group ID
2.Environment
3.Working directory.
4.Program instructions
5.Registers
6.Stack
7.Heap
8.File descriptors
9.Signal actions
10.Shared libraries
11.Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory).

•Threads use and exist within these process resources, yet are able to be scheduled by the operating system and run as independent entities largely because they duplicate only the bare essential resources that enable them to exist as executable code.

This independent flow of control is accomplished because a thread maintains its own:
1.Stack pointer
2.Registers
3.Scheduling properties (such as policy or priority)
4.Set of pending and blocked signals
5.Thread specific data.

So, in summary, in the UNIX environment a thread:
1.Exists within a process and uses the process resources
2.Has its own independent flow of control as long as its parent process exists and the OS supports it
3.Duplicates only the essential resources it needs to be independently schedulable
4.May share the process resources with other threads that act equally independently (and dependently)
5.Dies if the parent process dies - or something similar
6.Is "lightweight" because most of the overhead has already been accomplished through the creation of its process.

Because threads within the same process share resources:
1.Changes made by one thread to shared system resources (such as closing a file) will be seen by all other threads.
2.Two pointers having the same value point to the same data.
3.Reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.
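
As a minimal sketch (assuming a POSIX/Linux system with pthreads, compiled with gcc -pthread; the names worker and shared_counter are made up for illustration), the following program shows two threads sharing a process-global variable and synchronizing access to it:

#include <pthread.h>
#include <stdio.h>

/* Shared process resource: both threads see the same global variable. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    const char *name = (const char *)arg;

    /* Explicit synchronization is required when threads share memory. */
    pthread_mutex_lock(&lock);
    shared_counter++;
    printf("%s sees counter = %d\n", name, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Each thread gets its own stack and registers but shares the heap,
       globals and file descriptors of this process.                     */
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}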

Wednesday, November 11, 2009

HR Questions

1. Do you ever outperform others? Tell me about a time that demonstrates this.
2. Do you prefer a slow, medium, or fast-paced environment and why?
3. Do you ever take on too many tasks?
4. Do people come to you for answers and why?
5. How important is it to you to be the best?
6. Do you prefer to be part of a group or to lead a group and why?
7. What do you look for in a company that you work for?
8. On a scale of 1 to 10, how competitive are you and why?
9. How important is it for you to be 'liked' by your peers?
10. Has anyone approached you when they faced obstacles? When?
11. Describe the character of one of your managers.
12. Tell me about the best reputation or the best compliments you have received for your work.
13. Have you ever felt very proud of something you accomplished? When?
14. Rate yourself from 1 to 5 on your honesty.
15. Do people find you friendly in nature? Explain.
16. Is your character the same with everyone, or does it change from person to person?
17. What type of people do you prefer to work with?
18. Are you liked by your colleagues?
19. Do you get things done because they like you?
20. Whom do you share it with when you complete your work?
21. How did you deal with conflicts?
22. Do you prefer to be always busy or to have free time?
23. How do you manage things on a typical day?
24. What will you do if many projects are due at the same time?
25. Do your colleagues approach you for help? Give an example.
26. How do you thank your colleagues for their help?
27. Do you feel that you need to be recognized for your work?
28. What will you do if there are many distractions around you while you are working?
29. Are you focused? Give an example.
30. If your voice is not being heard in the group, what will you do in the discussions?
31. Will you go according to the plan or according to the flow?
32. Will you take decisions based on facts or on intuition?
33. If all other things are equal, would you prefer to be in the group or to take the lead position?
34. Would you like to instruct someone?

Difference between "C structure" and "C++ structure"?

1. In C, the definition of a structure is limited to the module in which it appears and it cannot be initialized outside that scope, whereas in C++ objects can be created and initialized anywhere within the boundaries of the project.
2. By default, C structure members are public. In C++, structure members are also public by default; it is class members that are private by default.
3. C does not support methods (member functions) inside a structure, but C++ does.
4. In a C++ structure we can add functions, but in a C structure we cannot.
5. In C++ a structure behaves much like a class: we can add functions and use class features such as inheritance and virtual functions. In C, a structure can have only data members, not functions.

What is the difference between an object and a class?

Class and object are different but related concepts. Every object belongs to a class, and every class has one or more related objects.

Class is:
1. A class is static.
2. A class is a combination of data (called data members) and functions (called member functions).
3. A class is a user-defined data type with data members and member functions.
4. A class defines objects.
5. One class can define any number of objects.
6. A class is a template (type) or blueprint; it states how its objects should be structured and behave.

Object is:
1. An object is an instance of a class, while a class can have many objects.
2. An object is dynamic.
3. Objects are the basic runtime entities in an object-oriented system.
4. Data and functions combined into a single entity form an object.
5. Objects are instances of classes and carry the properties of their class.
6. An object cannot define classes.
7. Objects can be created and destroyed as necessary.
8. An object is defined as a software construct which binds data and logic or methods (functions) together.

What's the difference between the keywords struct and class?

Class is defined as:
1. A class is a successor of the structure. By default, all the members inside a class are private.
2. A class is the advanced and more secure (encapsulated) form of a structure.
3. Classes are reference types.
4. A class can be inherited.
5. In a class we can initialize a variable during its declaration.
6. A class cannot be declared without a tag the first time.
7. A class supports polymorphism.

Structure is defined as:
1. C++ extended the structure to contain functions as well. All declarations inside a structure are public by default.
2. A C structure contains only data, while a class binds both data and member functions.
3. A C structure does not support polymorphism, inheritance or initialization.
4. A structure is a collection of different data types.
5. Structures can be overloaded.
6. Structures are value types.
7. A structure can be declared without a tag the first time.

Friday, November 6, 2009

Accessing fixed memory locations

Embedded systems are often characterized by requiring the programmer to access a specific memory location. On a certain project it is required to set an integer variable at the absolute address 0x67a9 to the value 0xaa55.
Example:
int *ptr;
ptr = (int *)0x67a9;
*ptr = 0xaa55;

A more obscure approach is:
*(int * const)(0x67a9) = 0xaa55;

What is the difference between a string copy (strcpy) and a memory copy (memcpy)? When should each be used?

The strcpy() function is designed to work exclusively with strings. It copies each byte of the source string to the destination string and stops when the terminating null character ('\0') has been moved.
On the other hand, the memcpy() function is designed to work with any type of data. Because not all data ends with a null character, you must provide the memcpy() function with the number of bytes you want to copy from the source to the destination.
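
A short, hypothetical C sketch of the difference (both copies land in local buffers assumed large enough for the data):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char src[] = "hello";
    char dst1[16];
    char dst2[16];

    /* strcpy stops at (and copies) the terminating '\0' of the source string. */
    strcpy(dst1, src);

    /* memcpy copies exactly the number of bytes requested, regardless of
       content, so it works for arbitrary data, not just strings.            */
    memcpy(dst2, src, sizeof src);   /* 6 bytes: "hello" plus the '\0' */

    printf("%s %s\n", dst1, dst2);
    return 0;
}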

What is the purpose of using stack?

The stack is the place where all the functions' local (auto) variables are created. The stack also contains some information used to call and return from functions.

A “stack trace” is a list of which functions have been called, based on this information. When you start using a debugger, one of the first things you should learn is how to get a stack trace.
The stack is very inflexible about allocating memory; everything must be deallocated in exactly the reverse order it was allocated in. For implementing function calls, that is all that’s needed. Allocating memory off the stack is extremely efficient. One of the reasons C compilers generate such good code is their heavy use of a simple stack.

There used to be a C function that any programmer could use for allocating memory off the stack. The memory was automatically deallocated when the calling function returned. This was a dangerous function to call; it’s not available anymore.

What is the heap?

The heap is where malloc(), calloc(), and realloc() get memory.
Getting memory from the heap is much slower than getting it from the stack. On the other hand, the heap is much more flexible than the stack. Memory can be allocated at any time and deallocated in any order. Such memory isn’t deallocated automatically; you have to call free().

Recursive data structures are almost always implemented with memory from the heap. Strings often come from there too, especially strings that could be very long at runtime. If you can keep data in a local variable (and allocate it from the stack), your code will run faster than if you put the data on the heap. Sometimes you can use a better algorithm if you use the heap—faster, or more robust, or more flexible. It’s a tradeoff.

If memory is allocated from the heap, it’s available until the program ends. That’s great if you remember to deallocate it when you’re done. If you forget, it’s a problem. A “memory leak” is some allocated memory that’s no longer needed but isn’t deallocated. If you have a memory leak inside a loop, you can use up all the memory on the heap and not be able to get any more. (When that happens, the allocation functions return a null pointer.) In some environments, if a program doesn’t deallocate everything it allocated, memory stays unavailable even after the program ends.

What is the purpose of realloc( )?

The function realloc(ptr, n) takes two arguments. The first argument, ptr, is a pointer to a block of memory whose size is to be altered. The second argument, n, specifies the new size.

The size may be increased or decreased.
If n is greater than the old size and sufficient space is not available adjacent to the old region, the function realloc( ) may create a new region and move all the old data to the new region.
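
A minimal usage sketch (hypothetical sizes, standard C library only) showing how the returned pointer should be handled:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Start with room for 3 ints on the heap. */
    int *p = malloc(3 * sizeof *p);
    if (p == NULL)
        return 1;
    p[0] = 1; p[1] = 2; p[2] = 3;

    /* Grow the block to 6 ints.  realloc may extend it in place or move the
       data to a new region, so always use the returned pointer.             */
    int *tmp = realloc(p, 6 * sizeof *p);
    if (tmp == NULL) {      /* on failure the old block is still valid */
        free(p);
        return 1;
    }
    p = tmp;
    p[3] = 4; p[4] = 5; p[5] = 6;

    for (int i = 0; i < 6; i++)
        printf("%d ", p[i]);
    printf("\n");

    free(p);
    return 0;
}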

What is the benefit of using const for declaring constants?

The benefit of using the const keyword is that the compiler might be able to make optimizations based on the knowledge that the value of the variable will not change. In addition, the compiler will try to ensure that the values won’t be changed inadvertently.

Of course, the same benefits apply to #defined constants. The reason to use const rather than #define to define a constant is that a const variable can be of any type (such as a struct, which can’t be represented by a #defined constant). Also, because a const variable is a real variable, it has an address that can be used, if needed, and it resides in only one place in memory.
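
For illustration, here is a small sketch (the names buffer_size, struct limits and range are hypothetical) contrasting a #define with const objects:

#include <stdio.h>

#define BUFFER_SIZE_MACRO 128        /* textual substitution: no type, no address */

static const int buffer_size = 128;  /* a real, typed object with an address */

/* const also works for aggregate types, which #define cannot represent: */
struct limits { int min; int max; };
static const struct limits range = { 0, 100 };

int main(void)
{
    const int *p = &buffer_size;     /* we can take its address if needed */
    printf("%d %d %d..%d\n", BUFFER_SIZE_MACRO, *p, range.min, range.max);
    return 0;
}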

Is it better to use a macro or a function?

The answer depends on the situation you are writing code for. Macros have the distinct advantage of being more efficient (and faster) than functions, because their corresponding code is inserted directly into your source code at the point where the macro is called. There is no overhead involved in using a macro like there is in placing a call to a function. However, macros are generally small and cannot handle large, complex coding constructs. A function is more suited for this type of situation. Additionally,macros are expanded inline, which means that the code is replicated for each occurrence of a macro. Your code therefore could be somewhat larger when you use macros than if you were to use functions.

Thus, the choice between using a macro and using a function is one of deciding between the tradeoff of faster program speed versus smaller program size. Generally, you should use macros to replace small, repeatable code sections, and you should use functions for larger coding tasks that might require several lines of code.
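
A small illustrative sketch (the names SQUARE_MACRO and square_func are made up) contrasting the two approaches:

#include <stdio.h>

/* Macro: its code is inserted directly at each point of use, so there is no
   call overhead, but every use enlarges the object code slightly.           */
#define SQUARE_MACRO(x) ((x) * (x))

/* Function: one copy of the code, a call/return at each use, type-checked.
   Better suited to larger, multi-line tasks.                                */
static int square_func(int x)
{
    return x * x;
}

int main(void)
{
    printf("%d %d\n", SQUARE_MACRO(5), square_func(5));   /* 25 25 */
    /* Note: passing an expression with side effects (e.g. i++) to the macro
       would evaluate it twice -- a classic macro pitfall functions avoid.   */
    return 0;
}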

What is static memory allocation and dynamic memory allocation?

Static memory allocation:
The compiler allocates the required memory space for a declared variable. By using the address-of operator, the reserved address is obtained, and this address may be assigned to a pointer variable. Since most declared variables have static memory, this way of assigning a pointer value to a pointer variable is known as static memory allocation. Memory is assigned during compilation time.

Dynamic memory allocation:
It uses functions such as malloc( ) or calloc( ) to get memory dynamically. If these functions are used to get memory dynamically and the values returned by these functions are assigned to pointer variables, such assignments are known as dynamic memory allocation. Memory is assigned during run time.
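
A brief sketch of both kinds of allocation in standard C (variable names are hypothetical):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Static memory allocation: the compiler reserves space for 'x' at
       compile time; the address-of operator gives that reserved address.  */
    int x = 10;
    int *sp = &x;

    /* Dynamic memory allocation: malloc()/calloc() hand out memory from the
       heap at run time; the returned address is stored in a pointer.       */
    int *dp = malloc(sizeof *dp);
    if (dp == NULL)
        return 1;
    *dp = 20;

    printf("%d %d\n", *sp, *dp);
    free(dp);          /* heap memory must be released explicitly */
    return 0;
}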

DO-178B FAQS

1.What is DO-178B?
In the avionics industry, software was initially viewed as an inexpensive and more flexible way to extend the functionality of mechanical and analog-electrical systems. However, it was quickly realized that the usual statistical approaches to assess the safety and reliability would not work for flight-critical software. An alternative means of assessment, one that addressed design errors rather than component failure rates, was required. From this need, the first version of DO-178 was created. The current version of the guideline is DO-178B.

2.Which systems need to be certified under DO-178B?
Under the Global Aviation Traffic Management (GATM) agreement, all commercial airborne systems have to comply with Federal Aviation Administration (FAA) regulations for avionics and require DO-178B certification. In addition, all airborne military and space systems must also comply with DO-178B. All retrofits, as well as new airborne system designs, also require DO-178B certification. Note that GATM has international validity and applicability.

3.What are the main goals that are addressed by DO-178B?
• Develop objectives for the life-cycle processes
• Provide a description of the activities and design considerations for achieving those objectives
• Provide a description of the evidence indicating the objectives have been satisfied

4.What is RTCA and what role does it play in DO-178B?
"RTCA" is the Radio Technical Commission for Aeronautics, Inc. ( www.rtca.org). It plays an important role in defining guidelines for various aviation practices. It is not a government agency. The guidelines it produces are sometimes accepted as standards by the FAA ex. DO-178B FAA Advisory Circular AC20-115B establishes DO-178B as the accepted means of certifying all new aviation software.

5.Who are DERs?
DERs, Designated Engineering Representatives, are experienced engineers designated by the FAA to approve engineering data used for certification. All FAA projects must have an FAA representative assigned and a DER to review all submissions.
A DER is an independent specialist designated by the FAA as having authority to sign off on your project as a representative of the FAA.
A DER will eventually examine your documentation. It is good practice to get a DER involved at an early stage in your development. The DER may insist on witnessing portions of your software testing. A DER may insist on changes to documentation before signoff.

6.What do the DO-178B levels mean?
DO-178B software levels (A, B, etc.) are based on the potential of the software to cause safety-related failures identified in the system safety assessment. DO-178B has five levels of certification:
Level A: Software whose failure would cause or contribute to a catastrophic failure of the aircraft.
Level B: Software whose failure would cause or contribute to a hazardous/severe failure condition.
Level C: Software whose failure would cause or contribute to a major failure condition.
Level D: Software whose failure would cause or contribute to a minor failure condition.
Level E: Software whose failure would have no effect on the aircraft or on pilot workload.
Who determines which DO-178B level is required?
The level to which a particular system must be certified is selected by a process of failure analysis and input from the device manufacturers and the certifying authority (FAA or JAA), with the final decision made by the certifying authority.
Certification at any level automatically covers the lower-level requirement. Software certified at Level A can be used in any avionics application.

7.What levels of structural testing are required by DO-178B?
Three primary levels of structural testing concern most DO-178B projects:
SC: Statement Coverage. Means that every statement in the program has been invoked or used at least once. This is the most common use of the term "code coverage".
DC: Decision Coverage. Means that every point of entry and exit in the program has been invoked at least once and that each decision in the program has taken on all possible (Boolean) outcomes at least once. Essentially, this means that every Boolean statement has been evaluated both TRUE and FALSE.
MCDC: Modified Condition Decision Coverage. Means that every point of entry and exit in the program has been invoked at least once, that every decision in the program has taken all possible outcomes at least once, and that each condition in a decision has been shown to independently affect that decision's outcome. Complex Booleans need to have truth tables developed to set each variable (inside a Boolean expression) to both TRUE and FALSE.

Monday, September 28, 2009

Verification Vs Validation

Verification:
Verification ensures the product is designed to deliver all functionality to the customer;

Verification ensures that the application complies with standards and processes. This answers the question "Did we build the system right?"
Eg: Design reviews, code walkthroughs and inspections.

It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, and walkthroughs and inspection meetings.

Checking documents for defects (that is, uncovered requirements). Here no code is executed. This checking is done before building the actual system. This process is called verification, or Quality Assurance.

Verification is a process in which information is checked using accurate measures.
E.g. when you enter a new password you are asked to retype it to verify that the password supplied is correct.

Validation:
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product;

Validation ensures that the application is built as per the plan. This answers the question "Did we build the right system?"
Eg: Unit Testing, Integration Testing, System Testing and UAT.

Validation typically involves actual testing and takes place after verifications are completed. In validation, checking is done by executing the code and looking for errors (defects). This can also be called Quality Control.

Validation, however, is the automated process in which rules are applied in order to check that information is correct,
E.g. if the right type of data is entered in a certain cell in a database.

“Verification is done through out the life cycle of the project/product where as validation comes in the testing phase of the application”.

Saturday, September 26, 2009

Honeywell Question Paper for experienced candidates in the Aero Domain

Honeywell Question Paper:

1
void main()
{
int i=32;
char *ptr = (char *) &i;
printf( "%d" , *ptr);
}

2
#define X 5+2
void main()
{
int i;
i = X * X * X;
printf( "%d" , i );
}

3
void main()
{
char c = 125;
c = c + 10;
printf( "%d" , c );
}

4
void main()
{
int i = 4, x;
x = ++i + ++i + ++i;
printf( "%d" , x );
}

5
void main()
{
register int i,x;
scanf( "%d" , &i );
x = ++i + ++i + ++i;
printf( "%d" , x);
}

6
void main()
{
int a = 5;
int b = 10;
{
int a = 2;
a++;
b++;
}
printf( "%d %d" , a , b );
}

7
enum color { red, green = -20, blue ,yellow };
void main()
{
enum color x;
x = yellow;
printf( "%d" , x );
}

8
void main()
{
char c = '0';
printf( "%d %d" , sizeof(c) , sizeof('0') );
}

9
void main()
{
int i =3;
if ( 3==i )
{
printf( "%d" , i << 2 << 1);
}
else
{
printf( " NOT EQUAL" );
}
}

10
void main()
{
const int i = 5;
i++;
printf( "%d" , i );
}

11
void main()
{
const int x = 25;
int *const p = &x;
*p = 2 * x;
printf( "%d" , x );
}

12
void main()
{
int i = 5 , j = 2;
if ( ++i > j++ || i++ > j++ );
printf( "%d" , i + j);
}

13
#define MAX 5
void main()
{
int i = 0;
i = MAX++;
printf( "%d" , i++);
}

14
void main()
{
int a = 5, b = 10, c = 15;
int *array[] = { &a , &b , &c };
printf( "%d" , *array[] );
}

15
void main()
{
int array[1] = {5};
int i ;
for ( i = 0 ; i <= 2 ; i++ )
{
printf( "%d" , array[i] );
}
}

16. With LATE BINDING, the function calls get resolved during?
-Compilation
-Run time
-Both A & B
-None of Above

17. Inheritance in C++ has which default access specifier?
-PRIVATE
-PUBLIC
-PROTECT
-None of above

18. Converting a C++ file to an object module is done by?
-Compiler
-Linker
-Both A & B
-none of above

19.Members of the class are by default?
-PRIVATE
-PUBLIC
-PROTECT
-None of above

20.Reference to its own class can be accepted by?
-Simple constructor
-Copy constructor
-Both A & B
-None of the above

21.STRICT parameter type checking is followed with which of the following?
-Inline
-Macros
-Both A & B
-None of Above

22. A friend function of a class in C++ can access?
-Private member of the class
-Protect member of the class
-Both A & B
-None of above

23. Which exists in C++?
-Virtual destructor
-Virtual constructor
-Both A & B
-None of above

24. (Not sure of the exact question)
Static in C++
-Class
-Object
-Both A & B
-None of above

25.What is the value of EOF(End of file)?
- 1
- 0
- Infinity
- -1

Monday, September 14, 2009

Microcontroller Vs Microprocessor

Microcontroller:
A Microcontroller is essentially a small and self-sufficient computer on a chip, used to control devices. It has all the memory and I/O it needs on board and is not expandable (no external bus interface).

Characteristics of a Microcontroller:
• Low cost, on the order of $1
• Low speed, on the order of 10 KHz –20 MHz
• Low Power, extremely low power in sleep mode
• Small architecture, usually an 8-bit architecture
• Small memory size, but usually enough for the type of application it is intended for. Onboard Flash.
• Limited I/O, but again, enough for the type of application intended for.

Microprocessor:
A Microprocessor is fundamentally a collection of on/off switches laid out over silicon in order to perform computations.

Characteristics of a Microprocessor:
• High cost, anywhere between $20 -$200 or more!
• High speed, on the order of 100 MHz –4 GHz
• High Power consumption, lots of heat
• Large architecture, 32-bit, and recently 64-bit architecture
• Large memory size, onboard flash and cache, with an external bus interface for greater memory usage
• Lots of I/O and peripherals, though Microprocessors
tend to be short on General purpose I/O

Wednesday, July 22, 2009

CISC Vs RISC

CISC Processors:

• Complex addressing modes
• More clock cycles are required to execute each instruction
• Larger number of instructions
• Instructions are microcoded
Ex: Intel x86, Motorola 68000

RISC Processors:

• Simple addressing modes
• One clock cycle is required to execute each instruction
• Smaller number of instructions
• Instructions execute directly in hardware
Ex: ARM, PIC

Von Neumann architecture Vs Harvard Architecture

Von Neumann architecture:

• Memory holds both data and instructions.

• The CPU first fetches an instruction from memory and then the data (if required for executing that instruction).

Harvard Architecture:

• Separate memory blocks for program (instructions) and data.

• More efficient, since accessing instructions and data can be done in parallel.

• Data memory is accessed more frequently than program memory.

Example: Blackfin Processors









Structural Coverage Analysis Objectives

Software coverage analysis is used to determine which requirements were not tested.
This is supported by the structural coverage analysis objectives required by DO-178B that are intended to determine what software structures (e.g. statements or decisions) were not exercised as a result of these verification activities.
This, in turn, reveals requirements that may have been in error, tests that were lacking adequate coverage for these structures, or dead code.

SC, DC, MC/DC
DO-178B defines Statement Coverage, Decision Coverage and Modified Condition/Decision Coverage as follows:

Modified Condition/Decision Coverage (MC/DC) -
Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, every decision in the program has taken on all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision’s outcome.

Decision Coverage (DC) -
Every point of entry and exit in the program has been invoked at least once and every decision in the program has taken on all possible outcomes at least once.

Statement Coverage (SC) -
Every statement in the program has been invoked at least once.

Overview of SEI-CMM



Levels of Maturity:
The CMM suggests five levels of maturity of a company's process. The maturity level of a company indicates the effectiveness of the company's software development practices. Maturity level 1 is the lowest; at this level the practices of the company are vague and do not provide a stable environment for developing and maintaining software. Maturity level 5 is the highest; at this level the company is said to be at the optimizing level and has practices to develop and maintain software efficiently. The five levels are briefly described below.
Level 1: Initial Level
At this level, the organization does not have any defined process and the development
activities are chaotic. The success and repeatability depends on the individual/team but
there is no structured help from the organization.


Level 2: Repeatable Level
At the Repeatable Level, policies for managing a software project and procedures to
implement those policies are established. Planning and managing new projects is
based on experience with similar projects. Process capability is enhanced by establishing
basic process management discipline on a project by project basis. An effective
process can be characterized as one which is practiced, documented, enforced, trained,
measured, and able to improve.


Level 3: Defined Level
At the Defined Level, the standard process for developing and maintaining software
across the organization is documented, including both software engineering and
management processes, and these processes are integrated into a coherent whole.
This standard process is referred to throughout the CMM as the organization's standard
software process. Processes established at Level 3 are used (and changed, as
appropriate) to help the software managers and technical staff perform more effectively.


Level 4: Managed Level
At the Managed Level, the organization sets quantitative quality goals for both software
products and processes. Productivity and quality are measured for important software
process activities across all projects as part of an organizational measurement program.
An organization-wide software process database is used to collect and analyze the data
available from the projects' defined software processes. Software processes are
instrumented with well-defined and consistent measurements at Level 4. These measurements
establish the quantitative foundation for evaluating the projects' software
processes and products.


Level 5: Optimizing
At the Optimizing Level, the entire organization is focused on continuous process
improvement. The organization has the means to identify weaknesses and strengthen
the process proactively, with the goal of preventing the occurrence of defects. Data on the
effectiveness of the software process is used to perform cost benefit analyses of new
technologies and proposed changes to the organization's software process. Innovations
that exploit the best software engineering practices are identified and transferred
throughout the organization.
Advantages of using CMM
By properly implementing the CMM recommendations, organizations can gain the following advantages, among others.
1. Mature development procedures
2. Better Risk Management and mitigation plans
3. Better defect prevention mechanism
4. High visibility on deliverable quality with minimum overheads
5. Better estimations based on available metrics
6. Better documentation at each stage
7. Better Team Management with clear roles and responsibilities
8. Organizations focus on continuous process improvement
9. Identification of process weakness and effort to rectify them


Overview on DO-178B

What is DO-178B?
DO-178B/ED-12B provides guidance on designing, specifying, developing, testing and deploying software in safety-critical avionics systems. In sum DO-178B is a guideline for determining, in a consistent manner and with an acceptable level of confidence, that the software aspects of airborne systems and equipment comply with FAA airworthiness requirements.

Scope of DO-178B:
• Cover engineering process and some support process
• Does not cover organization, management, and customer-supplier relationship processes
• Life cycle data description.

DO-178B Levels:
DO-178B software levels (A, B, etc.) are based on the potential of the software to cause safety-related failures identified in the system safety assessment. DO-178B has five levels of certification:

1.Level A: Software whose failure would cause or contribute to a catastrophic failure of the aircraft. (e.g., aircraft crash).
2.Level B: Software whose failure would cause or contribute to a hazardous/severe failure condition. (e.g., several persons could be injured).
3.Level C: Software whose failure would cause or contribute to a major failure condition. (e.g., flight management system could be down, the pilot would have to do it manually).
4.Level D: Software whose failure would cause or contribute to a minor failure condition. (e.g., some pilot-ground communications could have to be done manually).
5.Level E: Software whose failure would have no effect on the aircraft or on pilot workload. (e.g., entertainment features may be down).

According to the DO-178B level, the following test coverage (code coverage) is required:
DO-178B Level A:
Modified Condition/Decision Coverage (MC/DC)
Branch/Decision Coverage
Statement Coverage
DO-178B Level B:
Branch/Decision Coverage
Statement Coverage
DO-178B Level C:
Statement Coverage


DO-178B Documents needed for Certification (not all items are required at all certification levels):
Plan for Software Aspects of Certification (PSAC)
Software Development Plan (SDP)
Software Verification Plan (SVP)
Software Configuration Management Plan (SCMP)
Software Quality Assurance Plan (SQAP)
Software Requirements Standards (SRS)
Software Design Standards (SDS)
Software Code Standards (SCS)
Software Requirements Data (SRD)
Software Design Description (SDD)
Software Verification Cases and Procedures (SVCP)
Software Life Cycle Environment Configuration Index (SECI)
Software Configuration Index (SCI)
Software Accomplishment Summary (SAS)





DO-178B Records for certification:


Software Verification Results (SVR)
Problem Reports
Software Configuration Management Records
Software Quality Assurance Records

For each software level, DO-178B identifies a specific set of objectives that must be satisfied:
Level A – 66 objectives
Level B – 65 objectives
Level C – 57 objectives
Level D – 28 objectives
Level E – none




Advantages of DO-178B:
By using the DO-178B or similar standards like ED-12B, organizations will have the
following advantages.
1. High degree of product focus leading to quality product.
2. Safety assessment of the product is done in accordance with its role. The safety assessment is done at the beginning of the development cycle and, based on the assessment, the objectives for the corresponding level are complied with.
3. Very good verification & validation procedures to remove defects at each stage.
Procedures like MCDC test are done to remove all possible defects in the system.
4. Gives framework for development of safety critical systems
5. Makes sure that only qualified tools and other COTS (Commercial Off-The-Shelf) software are used for critical systems, by evaluating the procedure adopted in the development of such tools and COTS software.
6. Clear documentation that will facilitate certification and long product life cycles.


The software life cycle processes are:
1.PLANNING Process
The software planning process that defines and coordinates the activities of the software development and integral processes for a project.

2.DEVELOPMENT Process

The software development processes that produce the software product.
1.Software Requirements Process
2.Software Design Process
3.Software Coding Process
4.Integration Process

3.INTEGRAL Process

The Integral Processes that ensure the correctness, control, and confidence of the software life cycle processes and their outputs.
1.Software Verification Process
2.Software Configuration Management Process
3.Software Quality Assurance Process
4.Certification Liaison Process

Note: The integral processes are performed concurrently with the software development processes throughout the software life cycle.

Tuesday, July 14, 2009

Constant Vs Volatile variables

Constant variables:
The const type qualifier declares an object/variable to be non-modifiable, and the compiler is free to optimize accesses to it.

Volatile variables:
Volatile is a qualifier that is applied to a variable. A variable should be declared volatile whenever its value could change unexpectedly; the compiler then performs no optimization on accesses to it.

In practice, only three types of variables can change unexpectedly:
a.Memory-mapped peripheral registers
b.Global variables modified by an interrupt service routine
c.Global variables within a multi-threaded application

If we do not use volatile qualifier the following problems may arise:
1.Code that works fine until you turn optimization on
2.Code that works fine as long as interrupts are disabled
3.Flaky hardware drivers
4.Tasks that work fine in isolation, yet crash when another task is enabled

The volatile specifier disables various optimizations that a compiler might otherwise apply automatically and that would introduce bugs.
1) Some hardware interfacing applications periodically read values from a hardware port and copy it to a local variable for further processing. A good example is a timer function that reads the current count from an external clock. If the variable is const, the compiler might skip subsequent reads to make the code more efficient and store the result in a CPU register. To force it to read from the port every time the code accesses it, declare the variable const volatile.

2) In multithreaded applications, thread A might change a shared variable behind thread B’s back. Because the compiler doesn’t see any code that changes the value in thread B, it could assume that B’s value has remained unchanged and store it in a CPU register instead of reading it from the main memory. Alas, if thread A changes the variable’s value between two invocations of thread B, the latter will have an incorrect value. To force the compiler not to store a variable in a register, declare it volatile.

Constant volatile variables:
Can a variable be both Constant and Volatile?
Yes. An example is a read-only status register. It is volatile because it can change
unexpectedly. It is const because the program should not attempt to modify it.
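
A hedged embedded-style sketch of both uses (the register address 0x4000A000 and the ISR name uart_rx_isr are made up for illustration; on a desktop host this would not run as-is, but it compiles for a typical bare-metal target):

#include <stdint.h>

/* Hypothetical memory-mapped, read-only status register at a made-up address:
   'volatile' forces a real read on every access, 'const' stops the program
   from writing to it.                                                        */
#define STATUS_REG (*(const volatile uint32_t *)0x4000A000u)

/* Flag shared between an interrupt service routine and the main loop:
   without 'volatile' the compiler could cache it in a register and the loop
   might never see the ISR's update.                                          */
static volatile int data_ready = 0;

void uart_rx_isr(void)          /* hypothetical ISR, wired up elsewhere */
{
    data_ready = 1;
}

int main(void)
{
    while (!data_ready) {
        /* busy-wait; each iteration re-reads data_ready from memory */
    }
    return (STATUS_REG & 0x1u) ? 0 : 1;   /* each access re-reads the register */
}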

Wednesday, July 8, 2009

OSI Model in Networking.

The OSI 7-layer model has clear characteristics at each layer.
Basically, layers 7 through 4 deal with end-to-end communications between data source and destinations, while layers 3 to 1 deal with communications between network devices.

On the other hand, the seven layers of the OSI model can be divided into two groups: upper layers (layers 7, 6 & 5) and lower layers (layers 4, 3, 2, 1).

The upper layers of the OSI model deal with application issues and generally are implemented only in software.

The highest layer, the application layer, is closest to the end user. The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer are implemented in hardware and software.
The lowest layer, the physical layer, is closest to the physical network medium (the wires, for example) and is responsible for placing data on the medium.

Layer 7: Application Layer: Provides standardized services such as virtual terminal, file and job transfer and operations.

Layer 6: Presentation Layer: Specifies architecture-independent data transfer format. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 5: Session Layer: Controls establishment and termination of logic links between users. It provides for full-duplex, half-duplex, or simplex operation, and establishes check pointing, adjournment, termination, and restart procedures.

Layer 4: Transport Layer: Provides reliable and sequential packet delivery through error recovery and flow control mechanisms. The Transport Layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control.

Layer 3: Network Layer: Routes packets according to unique network device addresses. The Network Layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.

Layer 2: Data Link Layer: Frames packets. It is concerned with physical addressing, physical link management, network topology, error notification and flow control.

Layer 1: Physical Layer: Interfaces between network medium and devices. The Physical Layer defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, cable specifications, Hubs, repeaters, network adapters, Host Bus Adapters (HBAs used in Storage Area Networks) and more.

Mutex Vs Semaphore

Mutex: Is a key to a room. Only one person can have the key, and thus occupy the room, at a time. When finished, the person gives (frees) the key to the next person in the queue.

Officially: "Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread.
A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section."(A mutex is really a semaphore with value 1.)

Semaphore: Is the number of free identical room keys. For example, say we have four rooms with identical locks and keys. The semaphore count (the count of keys) is set to 4 at the beginning (all four rooms are free), and the count value is decremented as people come in. If all rooms are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the room, the semaphore is increased to 1 (one free key), which is given to the next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."

While a mutex will only let one owner attempt access, a Semaphore can be assigned a number and allow "x" number of threads access.
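
A minimal sketch using POSIX threads and an unnamed semaphore (assumes a Linux system, compile with -pthread; names such as room_key and guest are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static pthread_mutex_t room_key = PTHREAD_MUTEX_INITIALIZER;  /* one key  */
static sem_t room_keys;                                        /* N keys   */

static void *guest(void *arg)
{
    (void)arg;

    /* Mutex: only one thread at a time inside this section. */
    pthread_mutex_lock(&room_key);
    puts("alone in the single room");
    pthread_mutex_unlock(&room_key);

    /* Semaphore: up to 4 threads at a time inside this section. */
    sem_wait(&room_keys);        /* take a key (decrement)     */
    puts("in one of the four rooms");
    sem_post(&room_keys);        /* return the key (increment) */
    return NULL;
}

int main(void)
{
    pthread_t t[6];

    sem_init(&room_keys, 0, 4);  /* four identical rooms/keys */
    for (int i = 0; i < 6; i++)
        pthread_create(&t[i], NULL, guest, NULL);
    for (int i = 0; i < 6; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&room_keys);
    return 0;
}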

Types of Kernel

Monolithic Kernel (Macro Kernel): Kernel Image = (Kernel Core + Kernel Services). When the system boots up, all services are loaded and reside in memory.
Example: Windows and Unix.

Micro kernel: Kernel Image = Kernel Core. Services are built into special modules which can be loaded and unloaded as needed.
Example: VxWorks, MINIX

We have another type of kernel integration technique called Modular, which is derived from the best of the micro and monolithic kernels.
In Modular kernel integration: Kernel Image = (Kernel core + IPC service modules + Memory module + Process Management module).
All other modules are loadable kernel modules.
Example: Linux kernel

RTOS Vs GPOS

1.An RTOS (real-time operating system) is mostly "time triggered", whereas a GPOS (general-purpose operating system) is "event triggered".
2.A GPOS is "unpredictable" while an RTOS is "predictable".
3.An RTOS uses "priority preemptive scheduling" while most GPOSs use a "round robin" way of scheduling.

Some core functional similarities between a typical RTOS and GPOS include:

1.some level of multitasking,
2.software and hardware resource management,
3.provision of underlying OS services to applications, and
4.abstracting the hardware from the software application.

On the other hand, some key functional differences that set RTOSes apart from GPOSes include:

1.better reliability in embedded application contexts,
2.the ability to scale up or down to meet application needs,
3.faster performance,
4.reduced memory requirements,
5.scheduling policies tailored for real-time embedded systems,
6.support for diskless embedded systems by allowing executables to boot and run from ROM or RAM, and
7.better portability to different hardware platforms.

Difference between Hard realtime and Soft realtime systems

A hard real-time system is a real-time system that must meet its deadlines with a near-zero degree of flexibility. The deadlines must be met, or catastrophes occur. The cost of such a catastrophe is extremely high and can involve human lives. The computation results obtained after the deadline have either a zero level of usefulness or a high rate of depreciation as time moves further from the missed deadline before the system produces a response.
E.g.: missile tracking systems, remote sensing, etc.

A soft real-time system is a real-time system that must meet its deadlines but with a degree of flexibility. The deadlines can contain varying levels of tolerance, average timing deadlines, and even statistical distribution of response times with different degrees of acceptability. In a soft real-time system, a missed deadline does not result in system failure, but costs can rise in proportion to the delay, depending on the application.
E.g.: VCD players, washing machines, etc.

Different types of RAM (Random Access Memory)

•Dynamic RAM (DRAM)—DRAM is a RAM device that requires periodic refreshing to retain its content.

•Static RAM (SRAM)—SRAM is a RAM device that retains its content as long as power is supplied by an external power source. SRAM does not require periodic refreshing and it is faster than DRAM.

•Non-Volatile RAM (NVRAM)—NVRAM is a special type of SRAM that has backup battery power so it can retain its content after the main system power is shut off. Another variation of NVRAM combines SRAM and EEPROM so that its content is written into the EEPROM when power is shut off and is read back from the EEPROM when power is restored.

Preferred coding languages for embedded systems

C gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of high-level languages. Few popular high-level languages can compete with C in the production of compact, efficient code for almost all processors. And, of these, only C allows programmers to interact with the underlying hardware so easily.

C++ is an object-oriented superset of C that is increasingly popular among embedded programmers. All of the core language features are the same as C, but C++ adds new functionality for better data abstraction and a more object-oriented style of programming. These new features are very helpful to software developers, but some of them do reduce the efficiency of the executable program. So C++ tends to be most popular with large development teams, where the benefits to developers outweigh the loss of program efficiency.

Ada is also an object-oriented language, though it is substantially different than C++. Ada was originally designed by the U.S. Department of Defense for the development of mission-critical military software. Despite being twice accepted as an international standard (Ada83 and Ada95), it has not gained much of a foothold outside of the defense and aerospace industries. And it is losing ground there in recent years. This is unfortunate because the Ada language has many features that would simplify embedded software development if used instead of C++.

Tuesday, July 7, 2009

Role of Linker and Loader.

A compiler can be viewed as a program that accepts a source code (such as a Java program) and generates machine code for some computer architecture.

After the compiler generates machine code, the code is written into an object file.

An object file contains:

–Code (for methods, etc.)
–Variables (e.g., values for global variables)
–Debugging information
–References to code and data that appear elsewhere (e.g., printf)
–Tables for organizing the above.

This file is not executable since it may refer to external symbols (such as system calls). The operating system provides the following utilities to execute the code:

1.linking: A linker takes several object files and libraries as input and produces one executable object file. It retrieves from the input files (and puts together in the executable object file) the code of all the referenced functions/procedures, and it resolves all external references to real addresses. The libraries include the operating system libraries, the language-specific libraries, and, possibly, user-created libraries.

"Linking is simply the process of placing the address of a called function into the calling function's code. This is a fundamental software concept."

2.loading: A loader loads an executable object file into memory, initializes the registers, heap, data, etc and starts the execution of the program.

Linkers vs. Loaders:

Linkers and loaders perform various related but conceptually different tasks:

Program Loading: This refers to copying a program image from hard disk to the main memory in order to put the program in a ready-to-run state. In some cases, program loading also might involve allocating storage space or mapping virtual addresses to disk pages.

Relocation:
Compilers and assemblers generate the object code for each input module with a starting address of zero. Relocation is the process of assigning load addresses to different parts of the program by merging all sections of the same type into one section. The code and data section also are adjusted so they point to the correct runtime addresses.

Symbol Resolution: A program is made up of multiple subprograms; reference of one subprogram to another is made through symbols. A linker's job is to resolve the reference by noting the symbol's location and patching the caller's object code.
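
To make the roles concrete, here is a hedged two-file sketch assuming the GNU toolchain (the file names util.c/main.c and the symbol add are made up for illustration):

/* util.c -- defines a symbol that other modules will reference */
int add(int a, int b)
{
    return a + b;
}

/* main.c -- contains an unresolved reference to 'add' until link time */
#include <stdio.h>

extern int add(int a, int b);   /* declaration only; no address yet */

int main(void)
{
    printf("%d\n", add(2, 3));
    return 0;
}

/* Typical build steps (GNU toolchain):
     gcc -c util.c             # object file with the definition of 'add'
     gcc -c main.c             # object file with an external reference to 'add'
     gcc util.o main.o -o app  # linker resolves the reference, produces 'app'
     ./app                     # loader maps 'app' into memory and runs it     */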

Wednesday, June 24, 2009

DMA (Direct Memory Access) and its Cycle stealing

Direct Memory Access:

  1. Blocks of data are transferred between an external device and the main memory, without continuous intervention by the processor.
  2. A DMA controller temporarily
    a.borrows the address bus, data bus, and control bus from the microprocessor and
    b.transfers the data bytes directly between an I/O port and a series of memory locations.
  3. Two control signals are used to request and acknowledge a DMA transfer in the microprocessor-based system.
  4. The HOLD signal is a bus request signal which asks the microprocessor to release control of the buses after the current bus cycle.
  5. The HLDA signal is a bus grant signal which indicates that the microprocessor has indeed released control of its buses by placing the buses at their high-impedance states.
  6. The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
  7. OS is also responsible for suspending the execution of one program and starting another.
    –OS puts the program that requested the transfer in the Blocked state,
    –initiates the DMA operation,
    –starts execution of another program.
  8. When the transfer is complete, the DMA controller informs the processor by sending an interrupt request.
  9. OS puts suspended program in the Runnable state so that it can be selected by the scheduler to continue execution.

Cycle Stealing:

  1. Requests by DMA devices for using the bus are always given higher priority than processor requests.
  2. Among different DMA devices, top priority is given to high-speed peripherals (disks, high-speed network interface, graphics display device).
  3. Since the processor initiates most memory access cycles, it is often stated that DMA steals memory cycles from the processor (cycle stealing) for its purpose.
  4. If DMA controller is given exclusive access to the main memory to transfer a block of data without interruption, this is called block or burst mode.

Searching for files on a Linux machine

find:
Command used to find files in a Linux environment.

Syntax goes like this
find [starting point] [search criteria] [action]

Some of the ways to search files listed below for reference:

1.Basic usage:
find . -name "*.jpg"

Explanation: find is the command, the dot '.' means start from the current directory, and the -name "*.jpg" tells find to search for files with .jpg in the name. The * is a wild card.

2.Find all css files in the '/var/www' directory:

find /var/www -name "*.css" -print

3.Find Files by Size

a.Find all '.txt' files that are less than 100kb in size.

find . -name "*.txt" -size -100k -ls

b.Find Files over a GB in size

find ~/Movies -size +1024M

c.Find all files that are over 40kb in size.

find . -size +40k -ls

4.Find and remove Files:

The power comes when you want to apply an action with the search. This command will find all css files and then remove them.

find . -name "*.css" -exec rm -rf {} \;


It is worth noting that find is recursive, so be very careful when using the '-exec' flag. You could accidentally delete all css files on your computer if you are in the wrong directory. It is always a good idea to run find by itself before adding the -exec flag.

Friday, June 5, 2009

What are Exceptions and Interrupts?

An exception is any event that disrupts the normal execution of the processor and forces the processor into execution of special instructions in a privileged state. Exceptions can be classified into two categories: synchronous exceptions and asynchronous exceptions.
Exceptions raised by internal events, such as events generated by the execution of processor instructions, are called synchronous exceptions.
Examples of synchronous exceptions include the following:
1.On some processor architectures, the read and the write operations must start at an even memory address for certain data sizes. Read or write operations that begin at an odd memory address cause a memory access error event and raise an exception (called an alignment exception ).
2.An arithmetic operation that results in a division by zero raises an exception.

Exceptions raised by external events, which are events that do not relate to the execution of processor instructions, are called asynchronous exceptions. In general, these external events are associated with hardware signals. The sources of these hardware signals are typically external hardware devices.
Examples of asynchronous exceptions include the following:
1.Pushing the reset button on the embedded board triggers an asynchronous exception (called the system reset exception ).
2.The communications processor module that has become an integral part of many embedded designs is another example of an external device that can raise asynchronous exceptions when it receives data packets.

An interrupt, sometimes called an external interrupt, is an asynchronous exception triggered by an event that an external hardware device generates. Interrupts are one class of exception. What differentiates interrupts from other types of exceptions, or more precisely what differentiates synchronous exceptions from asynchronous exceptions, is the source of the event.
The event source for a synchronous exception is internally generated from the processor due to the execution of some instruction. On the other hand, the event source for an asynchronous exception is an external hardware device.
Applications of Exceptions and Interrupts :
In general, exceptions and interrupts help the embedded engineer in three areas:
1.internal errors and special conditions management,
2.hardware concurrency, and
3.service requests management.

For example, an embedded application running on a core processor issues work commands to a device. The embedded application continues execution, performing other functions while the device tries to complete the work issued. After the work is complete, the device triggers an external interrupt to the core processor, which indicates that the device is now ready to accept more commands. This method of hardware concurrency and use of external interrupts is common in embedded design.

Wednesday, June 3, 2009

Difference between Symbolic link and Hard link?

Linux has two kinds of file system links: symbolic links and hard links.

A symbolic link, also called a soft link or symlink, resembles a Windows shortcut. A symlink is a little file that contains the pathname of another object on the filesystem: a file, a directory, a socket, and so on, possibly even the pathname of another link. This pathname can be absolute or relative. To make a symlink, use ln with the -s option. Give the name of the target first, then the name of the link.

# ln -s existing-file-name link-name

We can still edit the original file by opening the symbolic link, and changes we make doing that will "stick." But if we delete the symbolic link, it has no impact on the original file at all. If we move or rename the original file, the symbolic link is "broken": it continues to exist but it points at nothing.

What happens if I edit the link?
Any modification made through the link is applied to the original file.
What happens if I delete the link?
If you delete the link the original file is unchanged. It will still exist.
What happens if I delete the original file but not the link?
The link will remain but will point to a file that does not exist. This is called an orphaned or dangling link.



A hard link isn't itself a separate file. Instead, it is a directory entry that points to a file using the file's inode number, so both names have the same inode number. Any change to the original file is reflected through the link as well, because both names refer to the same file.

# ln existing-file-name link-name

To give a file more than one name or to make the same file appear in multiple directories, you can make links to that file instead of copying it. One advantage of this is that a link takes little or even no disk space. Another is that, if you edit the target of the link, those changes can be seen immediately through the link.


The Difference Between Soft and Hard Links
Hard links
Only link to a file, not a directory
Cannot reference a file on a different disk/volume
Links will reference a file even if it is moved
Links reference inodes/physical locations on the disk

Symbolic (soft) links
Can link to directories
Can reference a file/folder on a different hard disk/volume
Links remain if the original file is deleted
Links will NOT reference the file anymore if it is moved
Links reference abstract filenames/directories and NOT physical locations. They are given their own inode
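The inode behaviour described above can also be observed from a program. The sketch below uses the standard link(), symlink(), stat() and lstat() calls; the file names are illustrative, "target.txt" is assumed to already exist, and error handling is omitted for brevity.

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat orig, hard, soft;

    link("target.txt", "hardlink.txt");      /* hard link: another name for the same inode */
    symlink("target.txt", "softlink.txt");   /* soft link: a small file with its own inode */

    stat("target.txt", &orig);
    stat("hardlink.txt", &hard);             /* same st_ino as the original                */
    lstat("softlink.txt", &soft);            /* lstat() inspects the link itself           */

    printf("original  inode: %lu\n", (unsigned long)orig.st_ino);
    printf("hard link inode: %lu\n", (unsigned long)hard.st_ino);
    printf("soft link inode: %lu\n", (unsigned long)soft.st_ino);
    return 0;
}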

Linux Interprocess Communication Mechanisms

Interprocess Communication Mechanisms:
Processes communicate with each other and with the kernel to coordinate their activities. Linux supports a number of Inter-Process Communication (IPC) mechanisms.
Signals:
Signals are one of the oldest inter-process communication methods used by UNIX systems. They are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or by an error condition, such as a process attempting to access a non-existent location in its virtual memory. Signals are also used by shells to send job control commands to their child processes.
Pipes and FIFOs (Named Pipes):
A pipe is a method of connecting the standard output of one process to the
standard input of another.
This feature is widely used, even on the UNIX command line (in the shell):
ls | sort | lp

Pipes and FIFOs (also known as named pipes) provide a unidirectional interprocess communication channel. A pipe has a read end and a write end. Data written to the write end of a pipe can be read from the read end of the pipe.
A pipe is created using pipe(2), which creates a new pipe and returns two file descriptors, one referring to the read end of the pipe, the other referring to the write end. Pipes can be used to create a communication channel between related processes; see pipe(2) for an example.
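A minimal sketch along those lines, with a parent reading what its child writes; error handling is omitted for brevity.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {              /* child: the writer */
        close(fd[0]);
        write(fd[1], "hello", 6);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                   /* parent: the reader */
    read(fd[0], buf, sizeof(buf));
    printf("parent read: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}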

A FIFO (short for First In First Out) has a name within the file system (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag.

Shared Memory:

Shared memory is provided to transfer large amounts of data between the kernel and user processes, or between processes. In real-time Linux variants, the mbuff driver is used to create such shared memory; any real-time task, kernel task, or user process can access this memory at any time.
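The mbuff driver is specific to real-time Linux variants. As a general illustration of shared memory between ordinary Linux processes, here is a sketch using the System V shmget()/shmat() calls; the key and segment size are arbitrary and error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int   id  = shmget((key_t)0x1234, 4096, IPC_CREAT | 0666);  /* create/find a 4 KB segment */
    char *mem = (char *)shmat(id, NULL, 0);                     /* map it into this process   */

    strcpy(mem, "shared data");      /* visible to every process attached to the segment */
    printf("%s\n", mem);

    shmdt(mem);                      /* detach from the segment                           */
    shmctl(id, IPC_RMID, NULL);      /* remove the segment once it is no longer needed    */
    return 0;
}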

Semaphores:

A semaphore is like a key that allows a task to carry out some operation or to access a resource. If the task can acquire the semaphore, it can carry out the intended operation or access the resource.

A kernel can support many different types of semaphores, including binary and counting semaphores.
Binary Semaphores :

A binary semaphore can have a value of either 0 or 1. When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full ).

Counting Semaphores :

A counting semaphore uses a count to allow it to be acquired or released multiple times. When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.

One or more tasks can continue to acquire a token from the counting semaphore until no tokens are left. When all the tokens are gone, the count equals 0, and the counting semaphore moves from the available state to the unavailable state. To move from the unavailable state back to the available state, a semaphore token must be released by any task.
Note that, as with binary semaphores, counting semaphores are global resources that can be shared by all tasks that need them. This feature allows any task to release a counting semaphore token. Each release operation increments the count by one, even if the task making this call did not acquire a token in the first place.
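The description above is written in RTOS-neutral terms. On Linux, the same idea can be sketched with a POSIX counting semaphore created with an initial token count (compile with -pthread); the initial count of 3 is illustrative.

#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    sem_t tokens;
    int   value;

    sem_init(&tokens, 0, 3);    /* counting semaphore created with 3 tokens */

    sem_wait(&tokens);          /* acquire a token: count 3 -> 2            */
    sem_wait(&tokens);          /* acquire another: count 2 -> 1            */
    sem_post(&tokens);          /* release one:     count 1 -> 2            */

    sem_getvalue(&tokens, &value);
    printf("tokens left: %d\n", value);

    sem_destroy(&tokens);
    return 0;
}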

Message Queues:

A message queue is a buffer-like object through which tasks and ISRs send and receive messages in order to communicate and synchronize with data. A message queue is like a pipeline: it temporarily holds messages from a sender until the intended receiver is ready to read them. This temporary buffering decouples the sending and receiving tasks; that is, it frees the tasks from having to send and receive messages simultaneously.

The message queue itself consists of a number of elements, each of which can hold a single message. The elements holding the first and last messages are called the head and tail respectively. Some elements of the queue may be empty (not containing a message). The total number of elements (empty or not) in the queue is the total length of the queue
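As an illustration, here is a minimal POSIX message queue sketch using mq_open()/mq_send()/mq_receive(); the queue name "/demo_q" and the attribute values are made up for the example, and older glibc versions need -lrt when linking.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { 0 };
    char buf[64];

    attr.mq_maxmsg  = 8;     /* queue length: up to 8 pending messages        */
    attr.mq_msgsize = 64;    /* each element holds a message of <= 64 bytes   */

    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(q, "sensor reading", 15, 0);     /* sender adds a message at the tail      */
    mq_receive(q, buf, sizeof(buf), NULL);   /* receiver takes the message at the head */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}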



Monday, June 1, 2009

Macro Delay

1.Macro way of introducing a 10 microsecond delay in C:

#define Nop() {__asm ("nop");}
// 1 Nop = 1 us

#define DELAY_10_US() \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop(); \
        Nop()
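One common refinement, sketched below, is to wrap the body in do { } while (0) so that the macro behaves like a single statement (for example after an if without braces). Note that the 1 NOP = 1 us assumption only holds at a particular clock frequency, and the __asm syntax is compiler specific.

#define Nop()          { __asm ("nop"); }          /* assumes 1 NOP = 1 us at the given clock */

#define DELAY_10_US()  do { Nop(); Nop(); Nop(); Nop(); Nop(); \
                            Nop(); Nop(); Nop(); Nop(); Nop(); \
                       } while (0)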

What is an Interrupt Service Routine?


  1. An interrupt service routine (ISR) is also known as an interrupt handler.

  2. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.

For example, here is a typical sequence of events when an external device raises an interrupt. (When the hardware does not support identifying the device that initiated the interrupt, the possible interrupting devices need to be polled in software instead.)

1.A device asserts the interrupt signal at a hardwired interrupt level.


2.The processor registers the interrupt and waits to finish the current instruction execution.

3.Once the current instruction execution is completed, the processor initiates the interrupt handling by saving the current register contents on the stack.

4.The processor then switches to supervisor mode and initiates an interrupt acknowledge cycle.

5.The interrupting device responds to the interrupt acknowledge cycle with the vector number for the interrupt.

6.The processor uses the vector number obtained above to fetch the vector.

7.The address found at the vector is the address of the interrupt service routine (ISR) for the interrupting device.


8.After the ISR has performed its job, it executes the "return from interrupt" instruction.

9.Execution of the "return from interrupt" instruction results in restoring the processor state. The processor is restored back to user mode.

FAQs on Vxworks tasks

Q: What is a task in VxWorks?
A: A task is an independent program with its own thread of execution and execution context. VxWorks uses a single common address space for all tasks thus avoiding virtual-to-physical memory mapping. Every task contains a structure called the task control block that is responsible for managing the task's context.
Q: How do tasks manage the context of execution?
A: Every task contains a structure called the task control block that is responsible for managing the task's context. A task's context includes
· program counter (thread of execution)
· CPU registers
· stack of dynamic variables and function calls
· signal handlers
· I/O assignments
· kernel control structures, etc.
Q: What are the Different states of tasks?
A: A task has 4 states: Ready, Pend, Delay, and Suspend.
Q: Explain the task State transition ?
A: A task can be created with taskInit() and then activated with the taskActivate() routine, or both actions can be performed in a single step using taskSpawn(). Once a task is created it is placed in the suspended state and remains suspended until it is activated, after which it is added to the ready queue to be picked up by the scheduler and run. A task may be suspended either because it is being debugged or because an exception has occurred. The difference between the pend and suspend states is that a task pends when it is waiting for a resource. A task that is put to sleep is added to the delay queue.
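A minimal sketch, assuming the classic taskSpawn() interface from taskLib; the task name, priority and stack size below are illustrative only.

#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>

void blinkTask(void)
{
    for (;;) {
        printf("tick\n");
        taskDelay(60);                  /* put the task to sleep: it moves to the delay queue */
    }
}

void startDemo(void)
{
    int tid = taskSpawn("tBlink",       /* task name                 */
                        100,            /* priority (0 is highest)   */
                        0,              /* options                   */
                        4096,           /* stack size in bytes       */
                        (FUNCPTR)blinkTask,
                        0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    if (tid == ERROR)
        printf("taskSpawn failed\n");
}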

FAQs on semaphore/mutex in VxWorks

Q: What are the different types of semaphores in vxworks? Which is the fastest?

A: VxWorks supports three types of semaphores. Binary, mutual exclusion, and counting semaphores. Binary is the fastest semaphore.


Q: When will you use binary semaphore ?

A: Binary semaphores are used for basic task synchronization and communication.


Q: When will you use mutual exclusion semaphore?

A: Mutual exclusion semaphores are sophisticated binary semaphores that are used to address the issues relating to task priority inversion and semaphore deletion in a multitasking environment.


Q: When will you use Counting semaphore?

A: Counting semaphores maintain a count of the number of times a resource is given. This is useful when an action is required for each event occurrence. For example if you have ten buffers, and multiple tasks can grab and release the buffers, then you want to limit the access to this buffer pool using a counting semaphore.


Q: What is the difference between Mutex and binary semaphore?

A: The differences are:

1) Mutex can be used only for mutual exclusion, while binary can be used for mutual exclusion as well as synchronisation.

2) Mutex can be given only by the task that took it.

3) Mutex cannot be given from an ISR.
4) Mutual-exclusion semaphores can be taken recursively. This means that the semaphore can be taken more than once by the task that holds it before finally being released.
5) Mutex provides an option for making the task that took it DELETE_SAFE. This means that the task cannot be deleted while it holds the mutex.
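A minimal sketch, assuming the semLib interface (semBCreate(), semMCreate(), semTake(), semGive()); the option flags shown are illustrative.

#include <vxWorks.h>
#include <semLib.h>

SEM_ID syncSem;    /* binary semaphore used for task synchronisation          */
SEM_ID guardSem;   /* mutual-exclusion semaphore protecting a shared resource */

void initSems(void)
{
    syncSem  = semBCreate(SEM_Q_FIFO, SEM_EMPTY);      /* starts unavailable                 */
    guardSem = semMCreate(SEM_Q_PRIORITY |
                          SEM_INVERSION_SAFE |         /* guards against priority inversion  */
                          SEM_DELETE_SAFE);            /* holder cannot be deleted           */
}

void worker(void)
{
    semTake(syncSem, WAIT_FOREVER);    /* block until another task or ISR gives the semaphore  */

    semTake(guardSem, WAIT_FOREVER);   /* enter the critical section                           */
    /* ... access the shared resource ... */
    semGive(guardSem);                 /* leave the critical section (only the taker may give) */
}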

Thursday, May 28, 2009

Difference between Unix and Linux

Unix is a popular operating system, developed by AT&T in 1969, and it has been very important to the development of the Internet. It is a multi-processing, multi-user family of operating systems that runs on a variety of architectures. UNIX allows more than one user to access a computer system at the same time.
Linux is a widely used open-source Unix-like operating system, first released by its creator Linus Torvalds in 1991. There are versions of Linux for almost every available type of computer hardware, from desktop machines to IBM mainframes. The inner workings of Linux are open and available for anyone to examine and change, as long as any changes they distribute are also made available to the public. Because of its robustness and availability, Linux has won popularity in the open-source community as well as among commercial application developers.

Here is more input:

•Unix has traditionally required a more powerful hardware configuration: it runs on large servers and mainframe computers rather than on a typical x86 based personal computer. Linux, however (which is built on the concepts of Unix), has small hardware requirements and will work on both a large mainframe computer and an x86 based personal computer.

•Unix is an operating system developed in the early days of computing, in which the kernel, the heart of the OS, interacts directly with the hardware. Because UNIX treats everything as a file, it provides greater security for users. UNIX systems follow the POSIX standards. Linux uses the UNIX architecture as its basis and provides more facilities and applications, building on the core UNIX ideas. Examples of Linux distributions are Red Hat, Fedora, SUSE, Mandriva, and Ubuntu. The Solaris OS is also UNIX-based; almost all UNIX commands will work on Solaris, in addition to around 500 Solaris-specific commands. Linux is open source, while most commercial UNIX systems are not.

•Unix is the foundation for a number of operating systems, with Linux being the most popular one. FreeBSD and Novell's UnixWare are two other commonly used Unix variants.

•UNIX is an operating system created in the early days of computers. More recently, Linux was created as an open-source, free operating system. It is "UNIX-like", meaning that it uses many UNIX constructs but also departs from traditional UNIX in many ways. Like UNIX, Linux is faster than many of the other commercially available operating systems, and it appears to be far more robust than any of the Microsoft products. Linux is used in many time-critical applications because of its speed. It is also used in many applications that need to maintain uptime, because Linux, like UNIX, can run for months at a time without rebooting. While the typical method of solving Microsoft problems is to "reboot", that particular requirement does not seem to be necessary in a Linux/Unix environment. While UNIX created a windows-like work environment, Linux has improved greatly on that concept. Linux has become a real player in the consumer operating system market, and it is free. While you may want to pay for a Linux distribution, the actual code is free and you are allowed to load it on as many machines as you want. You can also get Linux for free by downloading it over the internet.

Difference between Procedural and Object Oriented Programming

Procedural:
The problem is decomposed into individual procedures or subroutines. This decomposition is usually done in a top-down manner: once a section of the problem has been identified as being implementable by a procedure, it too is broken down into individual procedures. The data, however, is not usually part of this decomposition.
Eg: C & Pascal

Object-oriented:
The problem is decomposed into interacting objects. Each object encapsulates its state, hiding it behind the methods that manipulate it. A message sent to an object invokes the encapsulated method, which then performs the requested task.
Eg: Ada 95, Java

Wednesday, May 27, 2009

How does a language compiler work?

=>A compiler for a language generally has several different stages as it
processes the input.

=>They are:
1. Preprocessing
2. Lexical analysis
3. Syntactical analysis
4. Semantic analysis
5. Intermediate code generation
6. Code optimization
7. Code generation


1.Preprocessing:
During the preprocessing stage, comments, macros, and directives are
processed.
Comments are removed from the source file. This greatly simplifies the
later stages.
If the language supports macros, the macros are replaced with the equivalent
text.
The preprocessor may also replace special strings with other characters. In
C and C++, the \ character introduces an escape sequence, and the escape
sequence is replaced with a special character during translation. For example,
\t is the escape code for a tab, so \t ends up being replaced with
a tab character.
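A small illustration of what the preprocessor hands to the next stage; the macro name is made up for the example.

#define SQUARE(x)  ((x) * (x))       /* macro definition        */

int area = SQUARE(3 + 1);            /* macro use in the source */
/* after preprocessing the line above becomes:
       int area = ((3 + 1) * (3 + 1));
   and this comment, like all comments, has already been removed */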

2.Lexical analysis:
Lexical analysis is the process of breaking down the source files into
keywords, constants, identifiers, operators, and other simple tokens. A
token is the smallest piece of text that the language defines.
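As an illustration, here is one C statement and the kind of token stream a lexer might produce for it; the exact token names vary from compiler to compiler.

int main(void)
{
    int count = 7, total;
    total = count + 42;   /* tokens: identifier(total) operator(=) identifier(count)
                                     operator(+) constant(42) punctuator(;)          */
    return total;
}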


3.Syntactical analysis:
Syntactical analysis is the process of combining the tokens into
well-formed expressions, statements, and programs. Each language has
specific rules about the structure of a program, called the grammar or
syntax. Just like English grammar, it specifies how things may be put
together. In English, a simple sentence is: subject, verb, predicate.

4.Semantic analysis:
Semantic analysis is the process of examining the types and values of the
statements used to make sure they make sense. During semantic analysis,
the types, values, and other required information about statements are
recorded, checked, and transformed as appropriate.

5.Intermediate code generation:
Depending on the compiler, this step may be skipped, and instead the program
may be translated directly into the target language (usually machine object
code). If this step is implemented, the compiler designers also design a
machine-independent language of their own that is close to machine language
and easily translated into machine language for any number of different
computers.
The purpose of this step is to allow the compiler writers to support
different target computers and different languages with a minimum of effort.
The part of the compiler which deals with processing the source files,
analyzing the language and generating the intermediate code is called the
front end, while the process of optimizing and converting the intermediate
code into the target language is called the back end.

6. Code optimization
During this process the generated code is analyzed and improved for
efficiency. The compiler analyzes the code to see if improvements can be
made to the intermediate code that couldn't be made earlier. For example,
some languages, such as standard Pascal, do not allow pointer arithmetic,
while all machine languages do. When accessing arrays, it is often more
efficient to step through them with pointers, so the code optimizer may
detect this case and internally use pointers.

7. Code generation
Finally, after the intermediate code has been generated and optimized, the
compiler will generate code for the specific target language. Almost
always this is machine code for a particular target machine.


Also, it is usually not the final machine code, but is instead object code,
which contains all the instructions but in which not all of the final memory
addresses have been determined.

A subsequent program, called a linker, is used to combine several different object code files into the final executable program.

Wednesday, May 20, 2009

AFDX(Avionics Full DupleX Switched Ethernet)

AFDX - Overview

1.Advanced protocol system to interconnect avionics subsystems

2.The standard is based on widely approved and adopted standards like Ethernet (IEEE 802.3) and IP/UDP (Internet Protocols), which are applied for sharing information anywhere on the aircraft

3.100 Mbit/s switched Ethernet (wired).

4.Uses a special protocol to provide deterministic timing and redundancy management.

5.The main elements of an AFDX network are:
- AFDX End Systems
- AFDX Switches
- AFDX Links

Topology Description:

1.AFDX is a switched network rather than a bus.

2.All connections are full duplex (100 Mbit/s).

3.Redundancy is achieved by duplication of the connections (wires) and the switches.


Specifications of AFDX:

1.UDP/IP protocol including IP fragmentation/reassembly.

2.Virtual Links and Sub-Links.

3.Traffic shaping through the bandwidth allocation gap (BAG).

4.Redundancy control.

5.AFDX addressing with multicast and unicast addresses.

6.Sampling and queuing ports for transmit and receive.

7.Autonomously scheduled transmissions.

8.Statistics counters.

Avionics Full-Duplex Switched Ethernet:
Avionics Full-Duplex Switched Ethernet (AFDX) is Part 7 of the ARINC 664 Specification which defines how Commercial Off-the-Shelf (COTS) networking technology will be used for future generation Aircraft Data Networks (ADN).
AFDX is an Airbus trademark, the equivalent Boeing product is known as CDN or Common Data Network.
AFDX defines a low-level network and protocol to communicate between avionics (referred to as End-System) devices in aircraft.
It is based on Ethernet, and like all full-duplex networks uses dedicated outgoing and incoming channels to allow full-speed transmission in both directions at the same time.
AFDX extends standard Ethernet to provide high data integrity and deterministic timing.
It specifies interoperable functional elements at the following OSI Reference Model layers:

Data Link (MAC and Virtual Link addressing concept);
Network (IP and ICMP);
Transport (UDP and optionally TCP)
Application (Network) (Sampling, Queuing, SAP, TFTP and SNMP).
The Physical layer is not defined as part of ARINC 664 Part 7 (AFDX) but can be any of the solutions defined in ARINC 664 Part 2, including:

10BASE-T to support traffic at 10Mbit/s;
100BASE-TX and 100BASE-FX to support traffic at 100 Mbit/s; and
provisions for growth to 1000 Mbit/s operations.

Tuesday, May 19, 2009

Testing Definitions

Black box testing:

not based on any knowledge of internal design or code. Tests are based on requirements and functionality.


White box testing:


based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.


Unit testing


the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

Incremental integration testing

continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing

testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing

black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System testing

black box type testing that is based on overall requirement specifications; covers all combined parts of a system.

End-to-end testing

similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing

typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ’sane’ enough condition to warrant further testing in its current state.

Regression testing

re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Acceptance testing

final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing

testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing

term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing

term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing

testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing

testing of full, partial, or upgrade install/uninstall processes.

Recovery testing
testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.


Security testing

testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing

testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing

often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing

similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

User acceptance testing

determining if software is satisfactory to an end-user or customer.

Comparison testing

comparing software weaknesses and strengths to competing products.

Alpha testing

testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing

testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Mutation testing

a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (’bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.

1. What is Defect?
In computer technology, a Defect is a coding error in a computer program. It is defined by saying that “A software error is present when the program does not do what its end user reasonably expects it to do”.
Bug: the application works incorrectly or provides incorrect information.

Monday, May 18, 2009

Types of Software Testing

Hardware/software integration testing: To verify correct operation of the software in the target computer environment.
Software integration testing: To verify the interrelationships between software requirements and components and to verify the implementation of the software requirements and software components within the software architecture.
Low-level testing: To verify the implementation of software low-level requirements.

Objective of HSIT:
The objective of requirements-based hardware/software integration testing is to ensure that the software in the target computer will satisfy the high-level requirements.


Errors revealed by HSIT:
Incorrect interrupt handling.
Failure to satisfy execution time requirements.
Incorrect software response to hardware transients or hardware failures, for example, start-up sequencing, transient input loads and input power transients.
Data bus and other resource contention problems, for example, memory mapping.
Inability of built-in test to detect failures.
Errors in hardware/software interfaces.
Incorrect behavior of feedback loops.
Incorrect control of memory management hardware or other hardware devices under software control.
Stack overflow.
Incorrect operation of mechanism(s) used to confirm the correctness and compatibility of field-loadable software.
Violations of software partitioning

Objective of SIT:
The objective of requirements-based software integration testing is to ensure that the software components interact correctly with each other and satisfy the software requirements and software architecture.


Errors revealed by SIT
Incorrect initialization of variables and constants.
Parameter passing errors.
Data corruption, especially global data.
Inadequate end-to-end numerical resolution.
Incorrect sequencing of events and operations.

Objective of Low-Level testing:
The objective of requirements-based low-level testing is to ensure that the software components satisfy their low-level requirements.

Errors revealed by Low Level Testing
Failure of an algorithm to satisfy a software requirement.
Incorrect loop operations.
Incorrect logic decisions.
Failure to process correctly legitimate combinations of input conditions.
Incorrect responses to missing or corrupted input data.
Incorrect handling of exceptions, such as arithmetic faults or violations of array limits.
Incorrect computation sequence.
Inadequate algorithm precision, accuracy or performance.

Monday, May 11, 2009

Difference between Unformatted I/O functions

getchar():
The getchar() function accepts a single character from the keyboard. You have to specify the end of the input by pressing the Enter key, because getchar() accepts buffered input. This means that the input value is stored in a variable only after Enter is pressed.
If more than one character is typed before pressing Enter, only the first character is accepted by getchar(). The rest of the input characters remain in the buffer while the first input character is returned by getchar().

getch():
The getch() function accepts a single character from the keyboard. getch() accepts unbuffered input, which means that the input is directly stored in a variable. Therefore, there is no need to press the Enter key to specify the end of input.
getch() returns the character you type, and this character is assigned to the char variable.

getche():
The character-based input function getche() also accepts unbuffered input as a single character, similar to getch(). However, input to getche() is echoed on the screen. getche() returns the character accepted from the keyboard.

getc():
The getc() is used to accept a value for a character variable from a file. The file can be a disk file or the standard input device, which is the keyboard.

gets():
Finally, the gets() function accepts a sequence of characters with embedded spaces. A sequence of characters with embedded spaces is called a string. The end of the string is specified by pressing Enter.
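A small sketch contrasting the buffered getchar() with line input. Note that getch() and getche() come from the non-standard <conio.h> header (DOS/Windows compilers) and are omitted here, and that fgets() is shown instead of gets(), since gets() cannot limit the input length and has been removed from the C11 standard.

#include <stdio.h>

int main(void)
{
    char line[80];
    int  ch;

    printf("type a character and press Enter: ");
    ch = getchar();                     /* buffered: returned only after Enter is pressed */
    printf("you typed '%c'\n", ch);

    while (getchar() != '\n')           /* discard the rest of the buffered input line    */
        ;

    printf("type a line with spaces: ");
    fgets(line, sizeof(line), stdin);   /* reads a whole string, never past the buffer    */
    printf("you typed: %s", line);
    return 0;
}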