Welcome to graduate2professional.blogspot.com

Wednesday, June 24, 2009

DMA (Direct Memory Access) and its Cycle stealing

Direct Memory Access:

  1. Blocks of data are transferred between an external device and the main memory, without continuous intervention by the processor.
  2. A DMA controller temporarily
    a. borrows the address bus, data bus, and control bus from the microprocessor, and
    b. transfers the data bytes directly between an I/O port and a series of memory locations.
  3. Two control signals are used to request and acknowledge a DMA transfer in the microprocessor-based system.
  4. The HOLD signal is a bus request signal which asks the microprocessor to release control of the buses after the current bus cycle.
  5. The HLDA signal is a bus grant signal which indicates that the microprocessor has indeed released control of its buses by placing the buses at their high-impedance states.
  6. The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
  7. The OS is also responsible for suspending the execution of one program and starting another:
    – the OS puts the program that requested the transfer in the Blocked state,
    – initiates the DMA operation, and
    – starts execution of another program.
  8. When the transfer is complete, the DMA controller informs the processor by sending an interrupt request.
  9. The OS then puts the suspended program in the Runnable state so that it can be selected by the scheduler to continue execution.
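
To make the sequence above more concrete, here is a minimal C sketch of how a driver might program a memory-mapped DMA controller to move a block of data and interrupt on completion. The register names, addresses, and bit values are hypothetical, invented purely for illustration; a real controller is programmed according to its datasheet.

/* Hypothetical memory-mapped DMA controller registers (illustrative only). */
#include <stdint.h>

#define DMA_BASE        0x40001000u
#define DMA_SRC         (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST         (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_COUNT       (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL        (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_CTRL_START  (1u << 0)
#define DMA_CTRL_IRQ_EN (1u << 1)   /* interrupt the processor on completion */

void dma_start_transfer(uint32_t io_port, uint32_t mem_addr, uint32_t nbytes)
{
    DMA_SRC   = io_port;            /* I/O port to read from                 */
    DMA_DST   = mem_addr;           /* series of memory locations to fill    */
    DMA_COUNT = nbytes;             /* number of bytes to transfer           */
    DMA_CTRL  = DMA_CTRL_START | DMA_CTRL_IRQ_EN;   /* kick off the transfer */
    /* The processor is now free to run other code; the controller raises an
     * interrupt when the block transfer completes (step 8 above). */
}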

Cycle Stealing:

  1. Requests by DMA devices for using the bus are always given higher priority than processor requests.
  2. Among different DMA devices, top priority is given to high-speed peripherals (disks, high-speed network interface, graphics display device).
  3. Since the processor initiates most memory access cycles, it is often stated that DMA steals memory cycles from the processor (cycle stealing) for its purpose.
  4. If the DMA controller is given exclusive access to the main memory to transfer a block of data without interruption, this is called block or burst mode.

Searching file on Linux machine

find:
The command used to find files in a Linux environment.

The syntax goes like this:
find [starting point] [search criteria] [action]

Some of the ways to search for files are listed below for reference:

1. Basic usage:
find . -name "*.jpg"

Explanation: find is the command, the dot '.' means start from the current directory, and -name "*.jpg" tells find to search for files with .jpg in the name. The * is a wildcard.

2. Find all CSS files in the /var/www directory:

find /var/www -name "*.css" -print

3. Find files by size

a. Find all .txt files that are less than 100 KB in size:

find . -name "*.txt" -size -100k -ls

b. Find files over 1 GB in size:

find ~/Movies -size +1024M

c. Find all files that are over 40 KB in size:

find . -size +40k -ls

4. Find and remove files:

The power comes when you want to apply an action to the search results. This command will find all CSS files and then remove them:

find . -name "*.css" -exec rm -rf {} \;


It is worth noting that find is recursive, so be very careful when using the -exec flag. You could accidentally delete all CSS files on your computer if you are in the wrong directory. It is always a good idea to run find by itself before adding the -exec flag.

Friday, June 5, 2009

What are Exceptions and Interrupts?

An exception is any event that disrupts the normal execution of the processor and forces the processor into execution of special instructions in a privileged state. Exceptions can be classified into two categories: synchronous exceptions and asynchronous exceptions.
Exceptions raised by internal events, such as events generated by the execution of processor instructions, are called synchronous exceptions.
Examples of synchronous exceptions include the following:
1. On some processor architectures, the read and the write operations must start at an even memory address for certain data sizes. Read or write operations that begin at an odd memory address cause a memory access error event and raise an exception (called an alignment exception).
2. An arithmetic operation that results in a division by zero raises an exception.

Exceptions raised by external events, which are events that do not relate to the execution of processor instructions, are called asynchronous exceptions. In general, these external events are associated with hardware signals. The sources of these hardware signals are typically external hardware devices.
Examples of asynchronous exceptions include the following:
1. Pushing the reset button on the embedded board triggers an asynchronous exception (called the system reset exception).
2. The communications processor module that has become an integral part of many embedded designs is another example of an external device that can raise asynchronous exceptions when it receives data packets.

An interrupt, sometimes called an external interrupt, is an asynchronous exception triggered by an event that an external hardware device generates. Interrupts are one class of exception. What differentiates interrupts from other types of exceptions, or more precisely what differentiates synchronous exceptions from asynchronous exceptions, is the source of the event.
The event source for a synchronous exception is internally generated from the processor due to the execution of some instruction. On the other hand, the event source for an asynchronous exception is an external hardware device.
Applications of Exceptions and Interrupts:
In general, exceptions and interrupts help the embedded engineer in three areas:
1. internal errors and special conditions management,
2. hardware concurrency, and
3. service requests management.

For example, an embedded application running on a core processor issues work commands to a device. The embedded application continues execution, performing other functions while the device tries to complete the work issued. After the work is complete, the device triggers an external interrupt to the core processor, which indicates that the device is now ready to accept more commands. This method of hardware concurrency and use of external interrupts is common in embedded design.
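
A generic C sketch of this hardware-concurrency pattern is shown below. The names device_issue_command(), do_other_work(), and the way the ISR is hooked to the device's completion interrupt are all hypothetical; the point is only that the ISR sets a flag that the main loop observes, so the processor never busy-waits on the device.

#include <stdbool.h>

extern void device_issue_command(void);   /* hypothetical: starts work on the device */
extern void do_other_work(void);          /* hypothetical: unrelated application work */

static volatile bool device_ready = true; /* set by the ISR, read by the main code    */

void device_done_isr(void)                /* hooked to the device's completion interrupt */
{
    device_ready = true;                  /* device can now accept another command    */
}

void application_loop(void)
{
    for (;;) {
        if (device_ready) {
            device_ready = false;
            device_issue_command();       /* issue the next work command              */
        }
        do_other_work();                  /* CPU keeps doing other things meanwhile   */
    }
}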

Wednesday, June 3, 2009

Difference between Symbolic link and Hard link?

Linux has two kinds of file system links: symbolic links and hard links.

A symbolic link, also called a soft link or symlink, resembles a Windows shortcut. A symlink is a little file that contains the pathname of another object on the filesystem: a file, a directory, a socket, and so on, possibly even the pathname of another link. This pathname can be absolute or relative. To make a symlink, use ln with the -s option. Give the name of the target first, then the name of the link.

# ln -s existing-file-name link-name

We can still edit the original file by opening the symbolic link, and changes we make that way will "stick." But if we delete the symbolic link, it has no impact on the original file at all. If we move or rename the original file, the symbolic link is "broken": it continues to exist but points at nothing.

What happens if I edit the link?
Any modifications made through the link are applied to the original file.
What happens if I delete the link?
If you delete the link the original file is unchanged. It will still exist.
What happens if I delete the original file but not the link?
The link will remain but will point to a file that does not exist. This is called an orphaned or dangling link.



A hard link isn't itself a file. Instead, it's a directory entry. It points to another file using the file's inode number, which means both names have the same inode number. Any changes to the original file are reflected through the link as well, since both refer to the same data.

# ln existing-file-name link-name

To give a file more than one name or to make the same file appear in multiple directories, you can make links to that file instead of copying it. One advantage of this is that a link takes little or even no disk space. Another is that, if you edit the target of the link, those changes can be seen immediately through the link.


The Difference Between Soft and Hard Links
Hard links
Can only link to a file, not a directory
Cannot reference a file on a different disk/volume
Links will reference a file even if it is moved
Links reference inode/physical locations on the disk

Symbolic (soft) links
Can link to directories
Can reference a file/folder on a different hard disk/volume
Links remain if the original file is deleted
Links will NOT reference the file anymore if it is moved
Links reference abstract filenames/directories and NOT physical locations. They are given their own inode

Linux_Interprocess Communication Mechanisms

Interprocess Communication Mechanisms:
Processes communicate with each other and with the kernel to coordinate their activities. Linux supports a number of Inter-Process Communication (IPC) mechanisms.
Signals:
Signals are one of the oldest inter-process communication methods used by Unix systems. They are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or an error condition such as the process attempting to access a non-existent location in its virtual memory. Signals are also used by the shells to signal job control commands to their child processes.
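
As a small, standard POSIX illustration of handling an asynchronous signal, the sketch below installs a handler for SIGINT (normally generated by Ctrl-C) and waits for it; the handler only records the signal number, which is the usual async-signal-safe pattern.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signum)
{
    got_signal = signum;            /* keep the handler short and async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);   /* install the handler for SIGINT (Ctrl-C) */

    while (!got_signal)
        pause();                    /* sleep until any signal arrives */

    printf("caught signal %d\n", (int)got_signal);
    return 0;
}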
Pipes and FIFOs (Named Pipes):
A pipe is a method of connecting the standard output of one process to the standard input of another.
This feature is widely used, even on the UNIX command line (in the shell), for example:
ls | sort | lp

Pipes and FIFOs (also known as named pipes) provide a unidirectional interprocess communication channel. A pipe has a read end and a write end. Data written to the write end of a pipe can be read from the read end of the pipe.
A pipe is created using pipe(2), which creates a new pipe and returns two file descriptors, one referring to the read end of the pipe, the other referring to the write end. Pipes can be used to create a communication channel between related processes; see pipe(2) for an example.
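
A minimal sketch along the lines of the pipe(2) example: the parent writes a message into the write end and the child echoes whatever arrives on the read end (error handling is trimmed for brevity).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                         /* fds[0] = read end, fds[1] = write end */
    char c;

    if (pipe(fds) == -1) { perror("pipe"); exit(EXIT_FAILURE); }

    if (fork() == 0) {                  /* child: the reader */
        close(fds[1]);
        while (read(fds[0], &c, 1) > 0)
            write(STDOUT_FILENO, &c, 1);
        close(fds[0]);
        _exit(EXIT_SUCCESS);
    } else {                            /* parent: the writer */
        const char msg[] = "hello through the pipe\n";
        close(fds[0]);
        write(fds[1], msg, strlen(msg));
        close(fds[1]);                  /* reader sees end-of-file */
        wait(NULL);
    }
    return 0;
}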

A FIFO (short for First In First Out) has a name within the file system (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag.
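
A corresponding FIFO sketch (the /tmp/demo_fifo path is just an example): one process creates the FIFO and writes to it, and any process with permission, e.g. cat /tmp/demo_fifo in another terminal, can open the read end.

#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    const char msg[] = "message via FIFO\n";

    mkfifo(path, 0666);                 /* create the named pipe in the filesystem */

    int fd = open(path, O_WRONLY);      /* blocks until some reader opens the FIFO */
    if (fd != -1) {
        write(fd, msg, strlen(msg));
        close(fd);
    }
    unlink(path);                       /* remove the FIFO when done */
    return 0;
}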

Shared Memory:

To transfer large amounts of data between the kernel and a user process, shared memory is provided.
The mbuff driver is used for shared memory. Any real-time or kernel task or user process can access this memory at any time.
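
The mbuff driver mentioned above is specific to real-time Linux variants; as a more generally available illustration of the same idea, the sketch below maps a POSIX shared-memory object with shm_open() and mmap(). Any other process that opens "/demo_shm" sees the same bytes. (Older glibc versions require linking with -lrt; error handling is omitted.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 4096;

    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);   /* named shared object */
    ftruncate(fd, size);                                       /* give it a size      */

    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(mem, "hello from shared memory");   /* visible to every process mapping it */
    printf("%s\n", mem);

    munmap(mem, size);
    close(fd);
    /* shm_unlink("/demo_shm") removes the object once it is no longer needed. */
    return 0;
}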

Semaphores:

A semaphore is like a key that allows a task to carry out some operation or to access a resource. If the task can acquire the semaphore, it can carry out the intended operation or access the resource.

A kernel can support many different types of semaphores, including 1) binary and 2) counting semaphores.
Binary Semaphores:

A binary semaphore can have a value of either 0 or 1. When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full).

Counting Semaphores:

A counting semaphore uses a count to allow it to be acquired or released multiple times. When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.

One or more tasks can continue to acquire a token from the counting semaphore until no tokens are left. When all the tokens are gone, the count equals 0, and the counting semaphore moves from the available state to the unavailable state. To move from the unavailable state back to the available state, a semaphore token must be released by any task.
Note that, as with binary semaphores, counting semaphores are global resources that can be shared by all tasks that need them. This feature allows any task to release a counting semaphore token. Each release operation increments the count by one, even if the task making this call did not acquire a token in the first place.
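
The following POSIX-semaphore sketch illustrates the token behaviour described above (POSIX calls are used purely for illustration; the text itself is kernel-agnostic). Initialising the count to 1 gives a binary semaphore; initialising it to N gives a counting semaphore with N tokens. Compile with -pthread.

#include <semaphore.h>
#include <stdio.h>

#define NUM_TOKENS 3                     /* e.g. three identical buffers to hand out */

int main(void)
{
    sem_t sem;
    sem_init(&sem, 0, NUM_TOKENS);       /* 0 = shared between threads of this process */

    for (int i = 0; i < NUM_TOKENS; i++)
        sem_wait(&sem);                  /* acquire a token; blocks once the count is 0 */

    int value;
    sem_getvalue(&sem, &value);
    printf("tokens left: %d\n", value);  /* prints 0: the semaphore is now unavailable */

    sem_post(&sem);                      /* release a token: available again */
    sem_destroy(&sem);
    return 0;
}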

Message Queues:

A message queue is a buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize with data. A message queue is like a pipeline. It temporarily holds messages from a sender until the intended receiver is ready to read them. This temporary buffering decouples a sending and a receiving task; that is, it frees the tasks from having to send and receive messages simultaneously.

The message queue itself consists of a number of elements, each of which can hold a single message. The elements holding the first and last messages are called the head and tail respectively. Some elements of the queue may be empty (not containing a message). The total number of elements (empty or not) in the queue is the total length of the queue.
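
A POSIX message-queue sketch of the head/tail buffering described above (the mq_* calls are used only for illustration; an RTOS would offer its own queue API). Older glibc versions require linking with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };  /* queue length, element size */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(mq, "first", 6, 0);            /* messages go in at the tail */
    mq_send(mq, "second", 7, 0);

    char buf[64];
    mq_receive(mq, buf, sizeof buf, NULL); /* receiver reads from the head */
    printf("received: %s\n", buf);         /* prints "first" */

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}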



Monday, June 1, 2009

Macro Delay

1. A macro way of introducing a 10-microsecond delay in C:

#define Nop() {__asm ("nop");}
// 1 Nop = 1 us

#define DELAY_10_US() \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop(); \
    Nop()
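
A usage sketch under the assumption stated in the comment that one NOP takes roughly 1 us on the target (the real ratio depends on the clock frequency and the compiler and should be verified). PORT_REG is a hypothetical memory-mapped output port, invented only for illustration.

#include <stdint.h>

#define PORT_REG (*(volatile uint8_t *)0x2000u)   /* hypothetical output port */

void pulse_pin0(void)
{
    PORT_REG |= 0x01;    /* drive pin 0 high                         */
    DELAY_10_US();       /* hold it high for roughly 10 microseconds */
    PORT_REG &= ~0x01;   /* drive pin 0 low                          */
}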

What is an Interrupt Service Routine?


  1. An interrupt service routine (ISR) is also known as an interrupt handler.

  2. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and for transitions between protected modes of operation such as system calls.

For example, here is a scenario where the hardware does not support identifying the device that initiated the interrupt; in such cases, the possible interrupting devices need to be polled in software:

1. A device asserts the interrupt signal at a hardwired interrupt level.

2. The processor registers the interrupt and waits to finish the current instruction execution.

3. Once the current instruction execution is completed, the processor initiates the interrupt handling by saving the current register contents on the stack.

4. The processor then switches to supervisor mode and initiates an interrupt acknowledge cycle.

5. The interrupting device responds to the interrupt acknowledge cycle with the vector number for the interrupt.

6. The processor uses the vector number obtained above and fetches the vector.

7. The address found at the vector is the address of the interrupt service routine (ISR) for the interrupting device.

8. After the ISR has performed its job, it executes the "return from interrupt" instruction.

9. Execution of the "return from interrupt" instruction restores the processor state, and the processor is returned to user mode.
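
For the software-polling case described at the start of this example, a single ISR installed at the shared interrupt level must ask each possible device whether it raised the interrupt. The sketch below is hypothetical: the status-register addresses, bit masks, and service routines are invented for illustration.

#include <stdint.h>

#define UART_STATUS   (*(volatile uint32_t *)0x40002000u)  /* hypothetical status register */
#define TIMER_STATUS  (*(volatile uint32_t *)0x40003000u)  /* hypothetical status register */
#define IRQ_PENDING   (1u << 0)

extern void uart_service(void);    /* hypothetical device handlers */
extern void timer_service(void);

void shared_level_isr(void)        /* installed at the shared interrupt vector */
{
    /* Poll each device that can assert this interrupt level. */
    if (UART_STATUS & IRQ_PENDING)
        uart_service();            /* also clears the UART's pending bit  */

    if (TIMER_STATUS & IRQ_PENDING)
        timer_service();           /* also clears the timer's pending bit */

    /* Returning from here corresponds to the "return from interrupt"
     * instruction in steps 8-9: the saved processor state is restored. */
}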

FAQs on VxWorks tasks

Q: What is a task in VxWorks?
A: A task is an independent program with its own thread of execution and execution context. VxWorks uses a single common address space for all tasks, thus avoiding virtual-to-physical memory mapping. Every task contains a structure called the task control block that is responsible for managing the task's context.
Q: How do tasks manage the context of execution?
A: Every task contains a structure called the task control block that is responsible for managing the task's context. A task's context includes
· program counter (thread of execution)
· CPU registers
· stack of dynamic variables and function calls
· signal handlers
· I/O assignments
· kernel control structures, etc.
Q: What are the different states of tasks?
A: A task has four states: Ready, Pend, Delay, and Suspend.
Q: Explain the task State transition ?
A: A task can be created with taskInit() and then activated with the taskActivate() routine, or both of these actions can be performed in a single step using taskSpawn(). Once a task is created it is placed in the suspend state and remains suspended until it is activated, after which it is added to the ready queue to be picked up by the scheduler and run. A running task may also be suspended, either by debugging it or by the occurrence of an exception. The difference between the pend and suspend states is that a task pends when it is waiting for a resource. A task that is put to sleep is added to the delay queue.
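
A hedged sketch of the classic taskSpawn() call (exact argument types vary slightly between VxWorks versions, and the name, priority, and stack size below are arbitrary example values):

#include <taskLib.h>
#include <stdio.h>

void helloTask(int arg)
{
    for (;;) {
        printf("helloTask running, arg = %d\n", arg);
        taskDelay(60);                    /* sleep for 60 ticks (Delay state) */
    }
}

void startDemo(void)
{
    /* taskSpawn() = taskInit() + taskActivate() in one step. */
    int tid = taskSpawn("tHello",         /* task name                          */
                        100,              /* priority (0 = highest, 255 lowest) */
                        0,                /* options                            */
                        4096,             /* stack size in bytes                */
                        (FUNCPTR)helloTask,
                        42, 0, 0, 0, 0, 0, 0, 0, 0, 0);  /* arg1..arg10         */

    taskDelay(300);
    taskSuspend(tid);                     /* move the task to the Suspend state */
}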

FAQs on semaphore/mutex in VxWorks

Q: What are the different types of semaphores in VxWorks? Which is the fastest?

A: VxWorks supports three types of semaphores: binary, mutual-exclusion, and counting semaphores. The binary semaphore is the fastest.


Q: When will you use binary semaphore ?

A: Binary semaphores are used for basic task synchronization and communication.


Q: When will you use mutual exclusion semaphore?

A: Mutual exclusion semaphores are sophisticated binary semaphores that are used to address the issues relating to task priority inversion and semaphore deletion in a multitasking environment.


Q: When will you use Counting semaphore?

A: Counting semaphores maintain a count of the number of times a resource is given. This is useful when an action is required for each event occurrence. For example if you have ten buffers, and multiple tasks can grab and release the buffers, then you want to limit the access to this buffer pool using a counting semaphore.


Q: What is the difference between Mutex and binary semaphore?

A: The differences are:

1) Mutex can be used only for mutual exclusion, while a binary semaphore can be used for mutual exclusion as well as synchronisation.

2) Mutex can be given only by the task that took it.

3) Mutex cannot be given from an ISR.

4) Mutual-exclusion semaphores can be taken recursively. This means that the semaphore can be taken more than once by the task that holds it before finally being released.

5) Mutex provides an option for making the task that took it DELETE_SAFE. This means that the task cannot be deleted while it holds the mutex.
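
A hedged VxWorks sketch contrasting the two kinds: a binary semaphore used for task synchronisation and a mutual-exclusion semaphore protecting a shared resource. The option flags are the classic semLib ones; consult the semLib reference for the exact set supported by a given VxWorks version.

#include <semLib.h>

SEM_ID syncSem;     /* binary: given by an ISR or task, taken by a waiting task */
SEM_ID mutexSem;    /* mutex: protects a shared resource                        */

void initSems(void)
{
    syncSem  = semBCreate(SEM_Q_FIFO, SEM_EMPTY);   /* starts out unavailable */
    mutexSem = semMCreate(SEM_Q_PRIORITY |
                          SEM_INVERSION_SAFE |      /* guards against priority inversion */
                          SEM_DELETE_SAFE);         /* holder cannot be deleted          */
}

void waitingTask(void)
{
    semTake(syncSem, WAIT_FOREVER);     /* block until some other context gives it */
    /* ... the awaited event has occurred ... */
}

void updateSharedData(void)
{
    semTake(mutexSem, WAIT_FOREVER);    /* only the taker may give it back */
    /* ... critical section on the shared resource ... */
    semGive(mutexSem);
}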