
Input / Output Subsystem

Handles devices that allow the computer system to:

  • Communicate and interact with the outside world

    • Screen, keyboard, printer
  • Store information

!io-subsystem.png

The Control unit

The program is stored in memory (as machine language, in binary).

The task of the control unit is to execute programs by:

  • Fetch: the next instruction from memory
  • Decode: the instruction to determine what to do
  • Execute: by issuing signals to the ALU, memory and I/O systems

This cycle repeats until a HALT instruction is reached.

Typical Machine Instructions

  • MOV: move data from a to b
  • ADD: add numbers
  • SUB: subtract numbers
  • JMP: transfer program control flow to the indicated instruction

!example-machine-lang.png
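As a rough illustration of the fetch-decode-execute cycle and these instruction types, here is a minimal sketch of an interpreter for a made-up machine language. The opcodes mirror the list above, but the register layout, operand encoding and sample program are hypothetical, not taken from the slides.

```java
// Minimal sketch of a fetch-decode-execute loop for a made-up machine language.
// Opcodes, operands and the sample program are hypothetical illustrations.
public class TinyMachine {
    static final int MOV = 0, ADD = 1, SUB = 2, JMP = 3, HALT = 4;

    public static void main(String[] args) {
        // Each instruction: {opcode, destination register, operand}
        int[][] program = {
            {MOV, 0, 10},   // r0 = 10
            {MOV, 1, 3},    // r1 = 3
            {SUB, 0, 1},    // r0 = r0 - r1
            {ADD, 1, 0},    // r1 = r1 + r0
            {HALT, 0, 0}
        };
        int[] reg = new int[2];
        int pc = 0;                           // program counter

        while (true) {
            int[] instr = program[pc++];      // FETCH the next instruction from memory
            int opcode = instr[0];            // DECODE: determine what to do
            switch (opcode) {                 // EXECUTE: act on registers/memory
                case MOV:  reg[instr[1]] = instr[2]; break;
                case ADD:  reg[instr[1]] += reg[instr[2]]; break;
                case SUB:  reg[instr[1]] -= reg[instr[2]]; break;
                case JMP:  pc = instr[2]; break;
                case HALT: System.out.println("r0=" + reg[0] + " r1=" + reg[1]); return;
            }
        }
    }
}
```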

CPU Time vs IO Time

The total time required to run a program is: CPU time + I/O time.

  • Typical time to read a block from I/O (hard disk): 20 ms
  • Typical time to run one instruction: 20 ns

With these figures the CPU could execute roughly a million instructions while a single block is read from the hard disk (see the calculation below).
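To see where that number comes from, divide the I/O time by the time per instruction:

$$
\frac{20\ \text{ms}}{20\ \text{ns}} = \frac{20 \times 10^{-3}\ \text{s}}{20 \times 10^{-9}\ \text{s}} = 10^{6}\ \text{instructions}
$$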

Conclusion: while I/O is being performed, the CPU is idle.

!cpu-vs-io.png

We spend a lot of time waiting for I/O to finish, which is very inefficient. We can solve this using multitasking!

Multitasking

To speed up computers, we overlap CPU time and I/O time.

While one program waits for I/O, other programs can use the CPU. This solution requires multiple programs to be loaded into memory at the same time, hence the name multitasking.

Multitasking overlaps the I/O time of one program with the CPU time of another program.
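A minimal sketch of this overlap using Java threads, where Thread.sleep stands in for a blocking I/O wait. The task names, timings and amount of work are made up for illustration.

```java
// Sketch: while one task "waits for I/O" (simulated with sleep),
// another task keeps the CPU busy, so the two times overlap.
public class OverlapDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();

        Thread ioTask = new Thread(() -> {
            try {
                Thread.sleep(200);            // pretend: waiting 200 ms for a disk block
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("I/O task done");
        });

        Thread cpuTask = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 50_000_000L; i++) sum += i;  // pure CPU work
            System.out.println("CPU task done, sum=" + sum);
        });

        ioTask.start();
        cpuTask.start();      // CPU work proceeds while the other task is blocked
        ioTask.join();
        cpuTask.join();

        System.out.println("Total elapsed: " + (System.currentTimeMillis() - start) + " ms");
    }
}
```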

Multitasking with multiple processors

If the computer has a single CPU, programs must take turns using it. If the computer has multiple CPUs, each task can be given a different CPU.

  • If there are more tasks than available CPUs, tasks still take turns.

When a program issues an I/O command, the CPU is granted to the next program.
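One way to see this in code is a thread pool sized to the number of available CPUs: up to that many tasks can run at once, and any extra tasks take turns on the same workers. This is a sketch; the task count and task bodies are arbitrary.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: more tasks than CPUs means tasks take turns on the pool's worker threads.
public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);

        for (int i = 0; i < cpus * 3; i++) {          // deliberately more tasks than CPUs
            final int id = i;
            pool.submit(() -> System.out.println(
                "task " + id + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```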

Concurrent vs Parallel

Concurrent: fast switching from one program to the next (context switch) can create the illusion that they are being executed at the same time. -> Logically simultaneous

Parallel: if the computer has multiple CPUs or cores, the programs run in parallel. -> Physically simultaneous

!conc-parallel.png

Sharing Resources

Sometimes different programs (or different parts of the same program) share resources.

If the data or resource can only be accessed sequentially (one access at a time), the programs must take turns accessing it.

Because switching between programs is fast, their accesses to a shared resource are effectively concurrent.
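A minimal sketch of "taking turns" on a shared resource, here using Java's synchronized so that only one thread at a time touches a shared list. The resource and values are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: two threads take turns appending to one shared list.
// synchronized ensures only one thread accesses the resource at a time.
public class SharedResourceDemo {
    private static final List<Integer> shared = new ArrayList<>();

    static void append(int value) {
        synchronized (shared) {          // take your turn: one thread at a time
            shared.add(value);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) append(i); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) append(-i); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("entries: " + shared.size());   // always 2000
    }
}
```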

Concurrency

Unlike parallelism, concurrency is not always about running faster.

  • Single-CPU / single-core computers may also use concurrency.

Useful for:

  • App responsiveness
  • Processor utilization (hide I/O time)
  • Failure isolation (when multiple tasks are interleaved, an exception in one task will not bring down the rest; see the sketch below)
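A small sketch of the failure-isolation point: with an ExecutorService, a task that throws an exception does not stop the other submitted tasks. The task bodies are made up for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: an exception in one task does not bring down the other tasks.
public class FailureIsolationDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.execute(() -> { throw new RuntimeException("task 1 failed"); });
        pool.execute(() -> System.out.println("task 2 still runs"));
        pool.execute(() -> System.out.println("task 3 still runs"));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Task 1's stack trace is printed by its worker thread,
        // but tasks 2 and 3 complete normally.
    }
}
```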

Abstraction in Concurrency

A concurrent program consists of a finite set of (sequential) processes.

Each process is written using a finite set of statements.

Execution of a concurrent program proceeds by executing a sequence of statements obtained by arbitrarily interleaving the statements from the processes (while preserving the order of statements within each process).

!abstraction-in-conc.png

p1 -> q2 -> p2 -> q1 is not possible, because it breaks the order of execution of process q: q must execute q1 before q2.
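The interleaving rule can be made concrete by enumerating every ordering of two processes p = p1; p2 and q = q1; q2 that keeps each process's own order. This is a small sketch; the statement names follow the figure, the code is illustrative.

```java
import java.util.List;

// Sketch: enumerate all interleavings of p = [p1, p2] and q = [q1, q2]
// that preserve the internal order of each process.
public class Interleavings {
    static void interleave(List<String> p, int i, List<String> q, int j, String acc) {
        if (i == p.size() && j == q.size()) {
            System.out.println(acc);          // one complete interleaving
            return;
        }
        if (i < p.size()) interleave(p, i + 1, q, j, acc + p.get(i) + " ");
        if (j < q.size()) interleave(p, i, q, j + 1, acc + q.get(j) + " ");
    }

    public static void main(String[] args) {
        List<String> p = List.of("p1", "p2");
        List<String> q = List.of("q1", "q2");
        interleave(p, 0, q, 0, "");   // prints the 6 valid interleavings;
                                      // p1 q2 p2 q1 is never among them
    }
}
```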

Atomic statement

The atomic statement model assumes that a statement is executed to completion without the possibility of interleaving statements from another process.

The main property of atomic statements is that they cannot be divided.

A single machine-level instruction is always atomic.
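By contrast, a high-level statement such as count++ is not atomic: it is a read, an add and a write that can interleave with another thread. A small sketch comparing it with java.util.concurrent.atomic (the counter names and loop sizes are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: count++ is not atomic, so updates can be lost when two threads interleave;
// AtomicInteger.incrementAndGet() is atomic and never loses updates.
public class AtomicityDemo {
    static int plainCount = 0;
    static final AtomicInteger atomicCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCount++;                    // read-modify-write: can interleave
                atomicCount.incrementAndGet();   // executed as one indivisible step
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("plain:  " + plainCount);        // often less than 200000
        System.out.println("atomic: " + atomicCount.get()); // always 200000
    }
}
```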

Tracing & testing

Concurrent programs are hard to develop and test.

Concurrency bugs:

  • Simultaneous access to the same database record
  • Atomicity violations
  • Deadlocks (sketched below)

These bugs are hard to detect and fix.
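As an illustration of the deadlock bug class, two threads that acquire the same two locks in opposite order can block each other forever. This is a deliberately broken sketch (the lock names are made up); it will usually hang.

```java
// Sketch of a classic deadlock: two threads acquire two locks in opposite order.
// This program is intentionally broken and will usually hang forever.
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(50);                       // give the other thread time to grab lockB
                synchronized (lockB) { System.out.println("thread 1 done"); }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(50);
                synchronized (lockA) { System.out.println("thread 2 done"); }
            }
        }).start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```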

Correctness

Sequential programs will always give the same result for the same input, so debugging makes sense: a failure can be reproduced.

Concurrent programs do not behave like this: some executions may give correct output and others may not.
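A tiny sketch of that nondeterminism: two unsynchronized threads can print in a different order on different runs of the same program, so a failing run may not be reproducible in a debugger.

```java
// Sketch: the output order of the two threads is not fixed;
// different runs of the same program can print "A B" or "B A".
public class NondeterminismDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> System.out.println("A"));
        Thread b = new Thread(() -> System.out.println("B"));
        a.start(); b.start();
        a.join(); b.join();
    }
}
```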