
Embedded Real-time Systems


Presentation Transcript


  1. Embedded Real-time Systems: The Linux kernel

  2. The Operating System Kernel
  • Resident in memory, runs in privileged mode
  • System calls offer general-purpose services
  • Controls and mediates access to hardware
  • Implements and supports fundamental abstractions:
    • Process, file (file system, devices, interprocess communication)
  • Schedules / allocates system resources:
    • CPU, memory, disk, devices, etc.
  • Enforces security and protection
  • Event driven:
    • Responds to user requests for service (system calls)
    • Handles interrupts and exceptions
    • Context switch when the time quantum expires

  3. What is required?
  • Applications need an execution environment:
    • Portability, standard interfaces
    • Controlled access to files and devices
    • Preemptive multitasking
    • Virtual memory (protected memory, paging)
    • Shared libraries
    • Shared copy-on-write executables
    • TCP/IP networking
    • SMP support
  • Hardware developers need to integrate new devices:
    • Standard framework for writing device drivers
    • Layered architecture: hardware-dependent and hardware-independent code
    • Extensibility via dynamically loadable kernel modules (a minimal module is sketched below)
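The loadable-module point can be made concrete with a minimal module. This is an illustrative sketch only (the name hello and its log messages are made up); module_init, module_exit and printk are the standard interfaces.

    /* hello.c - minimal loadable kernel module (illustrative sketch) */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;                       /* 0 means successful initialisation */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);            /* called on insmod */
    module_exit(hello_exit);            /* called on rmmod */

The module would be loaded with insmod hello.ko and removed with rmmod hello; its messages appear in the kernel log (dmesg).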

  4. Linux Execution Environment • Program • Libraries • Kernel subsystems

  5. Linux Execution Environment • Execution paths

  6. Linux source code

  7. Linux source code layout
  • Linux/arch
    • Architecture-dependent code
    • Highly optimized common utility routines such as memcpy
  • Linux/drivers
    • Largest amount of code
    • Device, bus, platform and general directories
    • Character and block devices, network, video (a character-device sketch follows below)
    • Buses: PCI, AGP, USB, PCMCIA, SCSI, etc.
  • Linux/fs
    • Virtual file system (VFS) framework
    • Actual file systems:
      • On-disk formats: ext2, ext3, FAT, RAID, journaling, etc.
      • But also in-memory file systems: RAM, Flash, ROM
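As an illustration of the driver framework under Linux/drivers, a character device driver registers a table of file operations with the kernel. The sketch below is hypothetical (the name mydev and the trivial read handler are invented); register_chrdev and struct file_operations are the real interfaces.

    /* mydev.c - skeleton character device driver (illustrative sketch) */
    #include <linux/fs.h>
    #include <linux/module.h>

    static ssize_t mydev_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *ppos)
    {
        return 0;                       /* 0 bytes read = end of file */
    }

    static const struct file_operations mydev_fops = {
        .owner = THIS_MODULE,
        .read  = mydev_read,
    };

    static int major;                   /* major number chosen by the kernel */

    static int __init mydev_init(void)
    {
        major = register_chrdev(0, "mydev", &mydev_fops);  /* 0: pick a free major */
        return (major < 0) ? major : 0;
    }

    static void __exit mydev_exit(void)
    {
        unregister_chrdev(major, "mydev");
    }

    module_init(mydev_init);
    module_exit(mydev_exit);
    MODULE_LICENSE("GPL");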

  8. Linux source code layout
  • Linux/include
    • Architecture-dependent include subdirectories
    • Must be on the include path to compile your driver code:
      gcc … -I/<kernel-source-tree>/include …
    • Kernel-only portions are guarded by #ifdefs:
      #ifdef __KERNEL__
      /* kernel stuff */
      #endif
    • Specific directories: asm, math-emu, net, pcmcia, scsi, video
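A common use of the __KERNEL__ guard is a header shared between a driver and its user-space test programs: the ioctl numbers are visible to both sides, while kernel-only declarations compile only inside the kernel. The header below is a hypothetical sketch (the mydev_* names are invented).

    /* mydev.h - shared between driver and user space (illustrative sketch) */
    #ifndef MYDEV_H
    #define MYDEV_H

    #include <linux/ioctl.h>

    /* Visible to both kernel and user space: ioctl command numbers */
    #define MYDEV_IOC_MAGIC 'k'
    #define MYDEV_IOC_RESET _IO(MYDEV_IOC_MAGIC, 0)

    #ifdef __KERNEL__
    /* Kernel-only part: never compiled into user-space programs */
    #include <linux/cdev.h>

    struct mydev_state {
        struct cdev cdev;               /* character device bookkeeping */
        int open_count;                 /* how many times the device is open */
    };
    #endif /* __KERNEL__ */

    #endif /* MYDEV_H */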

  9. Process and System Calls
  • Process: a program in execution. Unique “pid”. Hierarchy.
  • User address space vs. kernel address space
  • An application requests OS services through the TRAP mechanism
    • x86: syscall number in the eax register, software exception (int $0x80)
    • result = read (file descriptor, user buffer, amount in bytes)
    • read returns the actual number of bytes transferred, or an error code (< 0); a complete user-space example follows below
  • The kernel has access to kernel address space (code, data, device ports and memory) and to user address space, but only that of the currently running process
    • The “current” macro points to the process descriptor of the running process; current->pid is its pid
  • Two stacks per process: a user stack and a kernel stack
  • Special routines copy parameters / results between user and kernel space
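The read() call on this slide can be written out as a complete user-space program (the file name is just an example). The libc wrapper places the system call number and arguments in registers and traps into the kernel (int $0x80 on classic 32-bit x86).

    /* read_example.c - user-space view of the read() system call (sketch) */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <errno.h>

    int main(void)
    {
        char buf[128];
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* read() returns the number of bytes actually transferred,
         * or -1 with errno set when the kernel reports an error. */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n < 0) {
            fprintf(stderr, "read failed: %s\n", strerror(errno));
            close(fd);
            return 1;
        }

        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
        close(fd);
        return 0;
    }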

  10. Scheduling processes
  • Process scheduling is done at the following events:
    1. the running process switches to the waiting state (sketched in kernel code below),
    2. the running process switches to the ready state,
    3. a waiting process switches to the ready state (the woken-up process may have a higher priority than the previously running process), or
    4. a process terminates.
  • If scheduling occurs only at events 1 and 4, scheduling is non-preemptive.
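Event 1 (running to waiting) corresponds to kernel code voluntarily giving up the CPU. A typical pattern in Linux is to put the current task on a wait queue and call schedule(); the fragment below is a generic sketch (the wait queue and condition variable names are invented).

    /* Sketch: a task sleeping until 'condition' becomes true */
    #include <linux/sched.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);  /* hypothetical wait queue */
    static int condition;                   /* set to 1 by some other context */

    static void wait_for_event(void)
    {
        DEFINE_WAIT(wait);

        for (;;) {
            /* Mark ourselves as sleeping and join the wait queue */
            prepare_to_wait(&my_wq, &wait, TASK_INTERRUPTIBLE);
            if (condition)
                break;
            schedule();                 /* event 1: running -> waiting */
        }
        finish_wait(&my_wq, &wait);     /* running again after event 3 */
    }

The matching wake-up (event 3, waiting to ready) would set condition and call wake_up(&my_wq) from another context.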

  11. Preemptive vs non-preemptive scheduling
  • With preemptive scheduling, another process can be scheduled at any time.
  • A process that is updating shared data must ensure that no other process starts using the data at the same time (by using a lock, for instance; see the sketch below).
  • The first UNIX kernels were non-preemptive, which simplified their design, but user processes were scheduled preemptively.
  • With multiprocessors, UNIX kernels were rewritten to be preemptive.
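With a preemptive kernel, “using a lock” typically means a spinlock or mutex around the critical section. The counter below is only an illustration of the pattern (the names are invented).

    /* Sketch: protecting shared data against concurrent update */
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);     /* hypothetical lock */
    static unsigned long packets_seen;      /* hypothetical shared counter */

    static void account_packet(void)
    {
        spin_lock(&stats_lock);             /* exclude other CPUs / tasks */
        packets_seen++;                     /* critical section: read-modify-write */
        spin_unlock(&stats_lock);
    }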

  12. Scheduling criteria 1
  • CPU utilisation: how busy the CPU is with user processes
  • Throughput: number of processes completed per time unit
  • Turnaround time: how long it takes for one process to execute
  • Waiting time: how long a process sits in the ready queue
  • Response time: how long it takes before a user gets some response (must be very small, otherwise too annoying)
  • Unfortunately, these goals are contradictory (a small worked example follows below)
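To make the waiting-time and turnaround-time definitions concrete, the sketch below computes both for a made-up FCFS schedule (all burst times are invented; every process arrives at time 0).

    /* Sketch: waiting and turnaround time under FCFS */
    #include <stdio.h>

    int main(void)
    {
        int burst[] = { 24, 3, 3 };         /* CPU bursts in ms */
        int n = sizeof(burst) / sizeof(burst[0]);
        int t = 0;                          /* current time */
        double total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            int waiting = t;                /* time spent in the ready queue */
            int turnaround = t + burst[i];  /* submission to completion */
            printf("P%d: waiting=%2d ms  turnaround=%2d ms\n",
                   i + 1, waiting, turnaround);
            total_wait += waiting;
            total_turnaround += turnaround;
            t += burst[i];
        }

        printf("average waiting    = %.1f ms\n", total_wait / n);
        printf("average turnaround = %.1f ms\n", total_turnaround / n);
        return 0;
    }

With bursts {24, 3, 3} the average waiting time is 17 ms; reordering them as {3, 3, 24} drops it to 3 ms, which is the intuition behind shortest-job-first.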

  13. Scheduling criteria 2
  • A short response time requires frequent re-scheduling.
  • Context-switching overhead can amount to several percent of CPU time, so compute-intensive jobs are best run to completion with as little rescheduling as possible.
  • For instance, running two long compute-intensive simulations one after the other is better on UNIX than running them concurrently (despite losing the positive effects of multiprogramming).
  • In addition to the context-switch time itself, frequent rescheduling increases the number of cache misses.
