
Wednesday, July 22, 2009

CISC vs RISC

CISC processors:

• Complex addressing modes
• Each instruction typically takes several clock cycles to execute
• Large instruction set
• Instructions are microcoded
Ex: Intel x86, Motorola 68000

RISC processors:

• Simple addressing modes
• Most instructions execute in a single clock cycle
• Small instruction set
• Instructions execute directly in hardware
Ex: ARM, PIC

Von Neumann Architecture vs Harvard Architecture

Von Neumann architecture:


• Memory holds both data and instructions.

• The CPU first fetches an instruction from memory and then the data (if required for executing that instruction).


Harvard Architecture:



• Separate memory blocks for program (instructions) and data

• More efficient, since instructions and data can be accessed in parallel

• Data memory is accessed more frequently than program memory

Example: Blackfin Processors









Structural Coverage Analysis Objectives

Software coverage analysis is used to determine which requirements were not adequately tested.
It is supported by the structural coverage analysis objectives required by DO-178B, which are intended to determine which software structures (e.g. statements or decisions) were not exercised by the requirements-based verification activities.
This, in turn, reveals requirements that may have been in error, tests that lacked adequate coverage of these structures, or dead code.

SC, DC, MC/DC
DO-178B defines Statement Coverage, Decision Coverage and Modified Condition/Decision Coverage as follows:

Modified Condition/Decision Coverage (MC/DC) -
Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, every decision in the program has taken on all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision’s outcome.

Decision Coverage (DC) -
Every point of entry and exit in the program has been invoked at least once and every decision in the program has taken on all possible outcomes at least once.

Statement Coverage (SC) -
Every statement in the program has been invoked at least once.
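
To make these definitions concrete, here is a minimal illustrative C sketch (the function and test vectors are hypothetical, not taken from DO-178B) showing what each coverage level demands of a single decision containing two conditions:

/* Hypothetical example: one decision (a && b) containing two conditions. */
int authorize(int a, int b)
{
    if (a && b)        /* the decision */
        return 1;      /* access granted */
    return 0;          /* access denied */
}

/*
 * Statement Coverage (SC): every statement executes at least once.
 *   Tests (a,b): (1,1) and (0,0) suffice.
 * Decision Coverage (DC): the decision (a && b) evaluates both true and false.
 *   Tests: (1,1) -> true, (0,0) -> false.
 * MC/DC: each condition is shown to independently affect the decision's outcome.
 *   (1,1) vs (0,1) shows 'a' alone flips the outcome; (1,1) vs (1,0) shows 'b' does.
 *   Minimum test set: (1,1), (0,1), (1,0).
 */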

Overview of SEI-CMM



Levels of Maturity:
The CMM defines five levels of maturity for an organization's process. The maturity level of an
organization indicates the effectiveness of its software development practices.
Maturity level 1 is the lowest; at this level the organization's practices are ad hoc
and do not provide a stable environment for developing and maintaining software.
Maturity level 5 is the highest; at this level the organization is said to be at the optimizing
level and has practices in place to develop and maintain software efficiently. The five levels are
briefly described below.
Level 1: Initial Level
At this level, the organization does not have any defined process and the development
activities are chaotic. Success and repeatability depend on the individual or team, but
there is no structured support from the organization.


Level 2: Repeatable Level
At the Repeatable Level, policies for managing a software project and procedures to
implement those policies are established. Planning and managing new projects is
based on experience with similar projects. Process capability is enhanced by establishing
basic process management discipline on a project by project basis. An effective
process can be characterized as one which is practiced, documented, enforced, trained,
measured, and able to improve.


Level 3: Defined Level
At the Defined Level, the standard process for developing and maintaining software
across the organization is documented, including both software engineering and
management processes, and these processes are integrated into a coherent whole.
This standard process is referred to throughout the CMM as the organization's standard
software process. Processes established at Level 3 are used (and changed, as
appropriate) to help the software managers and technical staff perform more effectively.


Level 4: Managed Level
At the Managed Level, the organization sets quantitative quality goals for both software
products and processes. Productivity and quality are measured for important software
process activities across all projects as part of an organizational measurement program.
An organization-wide software process database is used to collect and analyze the data
available from the projects' defined software processes. Software processes are
instrumented with well-defined and consistent measurements at Level 4. These measurements
establish the quantitative foundation for evaluating the projects' software
processes and products.


Level 5: Optimizing
At the Optimizing Level, the entire organization is focused on continuous process
improvement. The organization has the means to identify weaknesses and strengthen
the process proactively, with the goal of preventing the occurrence of defects. Data on the
effectiveness of the software process is used to perform cost benefit analyses of new
technologies and proposed changes to the organization's software process. Innovations
that exploit the best software engineering practices are identified and transferred
throughout the organization.
Advantages of using CMM
By properly implementing the CMM recommendations, organizations gain the
following advantages, among others.
1. Mature development procedures
2. Better Risk Management and mitigation plans
3. Better defect prevention mechanism
4. High visibility on deliverable quality with minimum overheads
5. Better estimations based on available metrics
6. Better documentation at each stage
7. Better Team Management with clear roles and responsibilities
8. Organization-wide focus on continuous process improvement
9. Identification of process weaknesses and efforts to rectify them


Overview on DO-178B

What is DO-178B?
DO-178B/ED-12B provides guidance on designing, specifying, developing, testing and deploying software in safety-critical avionics systems. In sum, DO-178B is a guideline for determining, in a consistent manner and with an acceptable level of confidence, that the software aspects of airborne systems and equipment comply with FAA airworthiness requirements.

Scope of DO-178B:
• Covers the engineering process and some supporting processes
• Does not cover organizational, management, or customer-supplier relationship processes
• Describes the life cycle data

DO-178B Levels:
DO-178B software levels (A, B, etc.) are based on the potential of the software to cause safety-related failures identified in the system safety assessment. DO-178B has five levels of certification:

1. Level A: Software whose failure would cause or contribute to a catastrophic failure condition for the aircraft (e.g., an aircraft crash).
2. Level B: Software whose failure would cause or contribute to a hazardous/severe failure condition (e.g., several persons could be injured).
3. Level C: Software whose failure would cause or contribute to a major failure condition (e.g., the flight management system could go down and the pilot would have to navigate manually).
4. Level D: Software whose failure would cause or contribute to a minor failure condition (e.g., some pilot-ground communications might have to be done manually).
5. Level E: Software whose failure would have no effect on the aircraft or on pilot workload (e.g., entertainment features may be down).

According to the DO-178B level, the following test coverage (code coverage) is required:
DO-178B Level A:
Modified Condition/Decision Coverage (MC/DC)
Branch/Decision Coverage
Statement Coverage
DO-178B Level B:
Branch/Decision Coverage
Statement Coverage
DO-178B Level C:
Statement Coverage


DO-178B Documents needed for Certification: (Not all items are required at every certification level.)
Plan for Software Aspects of Certification (PSAC)
Software Development Plan (SDP)
Software Verification Plan (SVP)
Software Configuration Management Plan (SCMP)
Software Quality Assurance Plan (SQAP)
Software Requirements Standards (SRS)
Software Design Standards (SDS)
Software Code Standards (SCS)
Software Requirements Data (SRD)
Software Design Description (SDD)
Software Verification Cases and Procedures (SVCP)
Software Life Cycle Environment Configuration Index (SECI)
Software Configuration Index (SCI)
Software Accomplishment Summary (SAS)





DO-178B Records for Certification:


Software Verification Results (SVR)
Problem Reports
Software Configuration Management Records
Software Quality Assurance Records

For each software level, DO-178B identifies a specific set of objectives that must be satisfied:
Level A – 66 objectives
Level B – 65 objectives
Level C – 57 objectives
Level D – 28 objectives
Level E – none




Advantages of DO-178B:
By using DO-178B or similar standards such as ED-12B, organizations will have the
following advantages.
1. High degree of product focus, leading to a quality product.
2. Safety assessment of the product is done in accordance with its role. The safety assessment is done at the beginning of the development cycle, and based on the assessment the objectives for the assigned level are complied with.
3. Very good verification and validation procedures to remove defects at each stage. Procedures like MC/DC testing are used to remove as many defects as possible from the system.
4. Gives a framework for the development of safety-critical systems.
5. Makes sure that only qualified tools and other COTS (Commercial Off-The-Shelf) software are used for critical systems, by evaluating the procedures adopted in the development of such tools and COTS software.
6. Clear documentation that facilitates certification and long product life cycles.


The software life cycle processes are:
1. PLANNING Process
The software planning process defines and coordinates the activities of the software development and integral processes for a project.

2. DEVELOPMENT Process

The software development processes produce the software product. They are:
1. Software Requirements Process
2. Software Design Process
3. Software Coding Process
4. Integration Process

3. INTEGRAL Process

The integral processes ensure the correctness, control, and confidence of the software life cycle processes and their outputs. They are:
1. Software Verification Process
2. Software Configuration Management Process
3. Software Quality Assurance Process
4. Certification Liaison Process

Note: The integral processes are performed concurrently with the software development processes throughout the software life cycle.

Tuesday, July 14, 2009

Constant Vs Volatile variables

Constant variables:
The const type qualifier declares an object/variable to be non-modifiable; the compiler remains free to optimize accesses to it.

Volatile variables:
volatile is a qualifier applied to a variable. A variable should be declared volatile whenever its value could change unexpectedly, so that the compiler performs no optimization on accesses to it.

In practice, only three types of variables can change unexpectedly:
a. Memory-mapped peripheral registers
b. Global variables modified by an interrupt service routine
c. Global variables shared within a multi-threaded application

If we do not use the volatile qualifier, the following problems may arise:
1. Code that works fine until you turn optimization on
2. Code that works fine as long as interrupts are disabled
3. Flaky hardware drivers
4. Tasks that work fine in isolation yet crash when another task is enabled

The volatile specifier disables various optimizations that a compiler might otherwise apply and thereby introduce bugs.
1) Some hardware-interfacing applications periodically read values from a hardware port and copy them to a local variable for further processing. A good example is a timer function that reads the current count from an external clock. Without volatile, the compiler might skip subsequent reads to make the code more efficient and keep the value in a CPU register. To force it to read from the port every time the code accesses it, declare the variable volatile (const volatile if the port is read-only).

2) In multithreaded applications, thread A might change a shared variable behind thread B's back. Because the compiler doesn't see any code that changes the value in thread B, it could assume that the value has remained unchanged and keep it in a CPU register instead of reading it from main memory. Alas, if thread A changes the variable's value between two invocations of thread B, the latter will use an incorrect value. To force the compiler not to cache a variable in a register, declare it volatile.
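
A minimal C sketch of case (b) above, a flag shared between an interrupt service routine and main-line code (the flag name and ISR are hypothetical), looks like this:

#include <stdint.h>

/* Hypothetical flag shared between an ISR and main-line code. Without volatile,
 * an optimizing compiler may cache the flag in a register and the wait loop
 * below would never see the ISR's update. */
static volatile uint8_t g_data_ready = 0;

/* Hypothetical interrupt service routine, invoked by hardware when a byte arrives. */
void uart_rx_isr(void)
{
    g_data_ready = 1;
}

void wait_for_data(void)
{
    while (!g_data_ready)
        ;                      /* volatile forces a fresh read on every pass */
    g_data_ready = 0;
    /* ... process the received data ... */
}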

Constant volatile variables:
Can a variable be both const and volatile?
Yes. An example is a read-only status register. It is volatile because it can change
unexpectedly. It is const because the program should not attempt to modify it.
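
A minimal sketch of such a register in C, assuming a hypothetical memory-mapped status register at address 0x40000000:

#include <stdint.h>

/* const: the program must not write to it (the compiler rejects assignments).
 * volatile: the hardware may change it at any time, so every access is a real read. */
#define STATUS_REG (*(const volatile uint32_t *)0x40000000u)

int device_is_busy(void)
{
    return (STATUS_REG & 0x1u) != 0u;   /* re-reads the register on every call */
}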

Wednesday, July 8, 2009

OSI Model in Networking.

The OSI seven-layer model has clear characteristics at each layer.
Basically, layers 7 through 4 deal with end-to-end communications between data source and destinations, while layers 3 to 1 deal with communications between network devices.

On the other hand, the seven layers of the OSI model can be divided into two groups: upper layers (layers 7, 6 & 5) and lower layers (layers 4, 3, 2, 1).

The upper layers of the OSI model deal with application issues and generally are implemented only in software.

The highest layer, the application layer, is closest to the end user. The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer are implemented in hardware and software.
The lowest layer, the physical layer, is closest to the physical network medium (the wires, for example) and is responsible for placing data on the medium.

Layer 7: Application Layer: Provides standardized services such as virtual terminal, file and job transfer and operations.

Layer 6: Presentation Layer: Specifies architecture-independent data transfer format. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 5: Session Layer: Controls the establishment and termination of logical links between users. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.

Layer 4: Transport Layer: Provides reliable and sequential packet delivery through error recovery and flow control mechanisms. The Transport Layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control.

Layer 3: Network Layer: Routes packets according to unique network device addresses. The Network Layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.

Layer 2: Data Link Layer: Frames packets. It is concerned with physical addressing, physical link management, network topology, error notification, and flow control.

Layer 1: Physical Layer: Interfaces between network medium and devices. The Physical Layer defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, cable specifications, Hubs, repeaters, network adapters, Host Bus Adapters (HBAs used in Storage Area Networks) and more.

Mutex Vs Semaphore

Mutex: A mutex is like a key to a room. One person can have the key, and so occupy the room, at a time. When finished, the person gives (frees) the key to the next person in the queue.

Officially: "Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread.
A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section."(A mutex is really a semaphore with value 1.)

Semaphore: A semaphore is the number of free identical room keys. For example, say we have four rooms with identical locks and keys. The semaphore count, the count of free keys, is set to 4 at the beginning (all four rooms are free), and the count is decremented as people come in. If all rooms are full, i.e. there are no free keys left, the semaphore count is 0. Now, when one person leaves a room, the semaphore is incremented to 1 (one free key) and the key is given to the next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."

While a mutex will only let one owner attempt access, a Semaphore can be assigned a number and allow "x" number of threads access.
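
The analogy translates directly into code. Below is a minimal POSIX sketch in C (the room names and the count of four are illustrative) contrasting a mutex with a counting semaphore:

#include <pthread.h>
#include <semaphore.h>

/* Mutex: one key to one room; only a single thread may be in the critical section. */
static pthread_mutex_t room_key = PTHREAD_MUTEX_INITIALIZER;

void use_room(void)
{
    pthread_mutex_lock(&room_key);     /* take the key (block if the room is occupied) */
    /* ... critical section ... */
    pthread_mutex_unlock(&room_key);   /* hand the key back */
}

/* Counting semaphore: four identical rooms, so up to four threads at once. */
static sem_t room_keys;

void init_rooms(void)
{
    sem_init(&room_keys, 0, 4);        /* four free keys to start with */
}

void use_any_room(void)
{
    sem_wait(&room_keys);              /* take a key (count--); wait if none are free */
    /* ... use one of the rooms ... */
    sem_post(&room_keys);              /* return the key (count++) */
}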

Types of Kernel

Monolithic Kernel (Macro Kernel): Kernel Image = (Kernel Core + Kernel Services). When the system boots up, all services are loaded and reside in memory.
Example: Windows and Unix.

Micro kernel: Kernel Image = Kernel Core. Services are built into separate modules which can be loaded and unloaded as needed.
Example: VxWorks, MINIX

There is another kernel integration technique called Modular, which takes the best of the micro and monolithic approaches.
In modular kernel integration: Kernel Image = (Kernel Core + IPC service module + Memory module + Process Management module).
All other modules are loadable kernel modules.
Example: Linux kernel

RTOS Vs GPOS

1. An RTOS (real-time operating system) is mostly "time triggered", whereas a GPOS (general-purpose operating system) is "event triggered".
2. A GPOS is "unpredictable" while an RTOS is "predictable".
3. An RTOS uses "priority preemptive scheduling" while most GPOSes use "round robin" scheduling.

Some core functional similarities between a typical RTOS and GPOS include:

1.some level of multitasking,
2.software and hardware resource management,
3.provision of underlying OS services to applications, and
4.abstracting the hardware from the software application.

On the other hand, some key functional differences that set RTOSes apart from GPOSes include:

1.better reliability in embedded application contexts,
2.the ability to scale up or down to meet application needs,
3.faster performance,
4.reduced memory requirements,
5.scheduling policies tailored for real-time embedded systems,
6.support for diskless embedded systems by allowing executables to boot and run from ROM or RAM, and
7.better portability to different hardware platforms.

Difference between Hard realtime and Soft realtime systems

A hard real-time system is a real-time system that must meet its deadlines with a near-zero degree of flexibility. The deadlines must be met, or catastrophes occur. The cost of such a catastrophe is extremely high and can involve human lives. Computation results obtained after the deadline are either useless or depreciate rapidly the longer the response is delayed past the missed deadline.
E.g., missile tracking systems, remote sensing, etc.

A soft real-time system is a real-time system that must meet its deadlines but with a degree of flexibility. The deadlines can contain varying levels of tolerance, average timing deadlines, and even statistical distribution of response times with different degrees of acceptability. In a soft real-time system, a missed deadline does not result in system failure, but costs can rise in proportion to the delay, depending on the application.
E.g., VCD players, washing machines, etc.

Different types of RAM (Random Access Memory)

•Dynamic RAM (DRAM)—DRAM is a RAM device that requires periodic refreshing to retain its content.

•Static RAM (SRAM)—SRAM is a RAM device that retains its content as long as power is supplied by an external power source. SRAM does not require periodic refreshing and it is faster than DRAM.

•Non-Volatile RAM (NVRAM)—NVRAM is a special type of SRAM that has backup battery power so it can retain its content after the main system power is shut off. Another variation of NVRAM combines SRAM and EEPROM so that its content is written into the EEPROM when power is shut off and is read back from the EEPROM when power is restored.

Preferred coding languages for embedded systems

C gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of high-level languages. Few popular high-level languages can compete with C in the production of compact, efficient code for almost all processors. And, of these, only C allows programmers to interact with the underlying hardware so easily.

C++ is an object-oriented superset of C that is increasingly popular among embedded programmers. All of the core language features are the same as C, but C++ adds new functionality for better data abstraction and a more object-oriented style of programming. These new features are very helpful to software developers, but some of them do reduce the efficiency of the executable program. So C++ tends to be most popular with large development teams, where the benefits to developers outweigh the loss of program efficiency.

Ada is also an object-oriented language, though it is substantially different from C++. Ada was originally designed by the U.S. Department of Defense for the development of mission-critical military software. Despite being twice accepted as an international standard (Ada83 and Ada95), it has not gained much of a foothold outside of the defense and aerospace industries, and it has been losing ground there in recent years. This is unfortunate because the Ada language has many features that would simplify embedded software development if used instead of C++.

Tuesday, July 7, 2009

Role of Linker and Loader.

A compiler can be viewed as a program that accepts source code (such as a Java program) and generates machine code for some computer architecture.

After the compiler generates the machine code, it is written to an object file.

An object file contains:

–Code (for methods, etc.)
–Variables (e.g., values for global variables)
–Debugging information
–References to code and data that appear elsewhere (e.g., printf)
–Tables for organizing the above.

This file is not executable since it may refer to external symbols (such as system calls). The operating system provides the following utilities to execute the code:

1. Linking: A linker takes several object files and libraries as input and produces one executable object file. It retrieves from the input files (and puts together in the executable object file) the code of all the referenced functions/procedures, and it resolves all external references to real addresses. The libraries include the operating system libraries, the language-specific libraries, and, possibly, user-created libraries.

"Linking is simply the process of placing the address of a called function into the calling function's code. This is a fundamental software concept."

2. Loading: A loader loads an executable object file into memory, initializes the registers, heap, data, etc., and starts the execution of the program.

Linkers vs. Loaders:

Linkers and loaders perform various related but conceptually different tasks:

Program Loading: This refers to copying a program image from hard disk to the main memory in order to put the program in a ready-to-run state. In some cases, program loading also might involve allocating storage space or mapping virtual addresses to disk pages.

Relocation:
Compilers and assemblers generate the object code for each input module with a starting address of zero. Relocation is the process of assigning load addresses to the different parts of the program by merging all sections of the same type into one section. The code and data sections are also adjusted so that they point to the correct runtime addresses.

Symbol Resolution: A program is made up of multiple subprograms; reference of one subprogram to another is made through symbols. A linker's job is to resolve the reference by noting the symbol's location and patching the caller's object code.
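
As a minimal illustration (the file and function names are hypothetical), the extern declaration below leaves an unresolved symbol in main.o that the linker patches with the real address of add() when the two object files are combined:

/* util.c: defines the symbol 'add' */
int add(int a, int b)
{
    return a + b;
}

/* main.c: references 'add'; main.o carries an unresolved reference to it */
extern int add(int a, int b);

int main(void)
{
    return add(2, 3);   /* the linker fills in the address of add() here */
}

/* Compiling each file separately (e.g., gcc -c main.c util.c) produces the object
 * files; linking them (e.g., gcc main.o util.o) resolves the reference and
 * produces the executable. */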