# Input / Output Subsystem

Handles devices that allow the computer system to:

* Communicate and interact with the outside world: screen, keyboard, printer
* Store information

![[io-subsystem.png]]

# The Control Unit

The program is stored in memory as machine language (in binary). The task of the control unit is to execute programs by repeating a cycle:

* Fetch the next instruction from memory
* Decode the instruction to determine what to do
* Execute it by issuing signals to the ALU, memory, and I/O systems

The cycle repeats until a HALT instruction is reached. (A toy sketch of this loop appears at the end of these notes.)

# Typical Machine Instructions

* MOV: move data from a to b
* ADD: add numbers
* SUB: subtract numbers
* JMP: transfer program control flow to the indicated instruction

![[example-machine-lang.png]]

# CPU Time vs IO Time

The total time required to run a program is CPU time + I/O time.

* Typical time to read a block from I/O: 20 ms
* Typical time to run an instruction: 20 ns

At these speeds, the CPU could execute about one million instructions (20 ms / 20 ns = 10^6) while a single block is read from a hard disk.

Conclusion: while I/O is in progress, the CPU is idling.

![[cpu-vs-io.png]]

We spend most of the time waiting for I/O to finish, which is very inefficient. We can solve this using multitasking!

# Multitasking

To speed up computers, we overlap CPU time and I/O time: while one program waits for I/O, other programs can use the CPU. This solution requires multiple programs to be loaded into memory at the same time, hence the name multitasking.

Multitasking overlaps the I/O time of one program with the CPU time of other programs. (See the threading sketch at the end of these notes.)

# Multitasking with Multiple Processors

If the computer has a single CPU, programs must take turns using it. If the computer has multiple CPUs, each task can be given its own CPU.

* If there are more tasks than available CPUs, tasks still take turns: when a program issues an I/O command, its CPU is granted to the next program.

# Concurrent vs Parallel

**Concurrent**: fast switching from one program to the next (a context switch) can create the illusion that they are being executed at the same time. -> **Logically simultaneous**

**Parallel**: if the computer has multiple CPUs or cores, the programs run in parallel. -> **Physically simultaneous**

![[conc-parallel.png]]

# Sharing Resources

Different programs (or different parts of the same program) may share resources. If the data is only **sequentially** accessible, programs must take turns accessing the resource. Fast switching between accesses to a shared resource results in **concurrent** access. (See the lock sketch at the end of these notes.)

# Concurrency

Unlike parallelism, concurrency is not always about running faster; single-CPU/core computers also use concurrency. It is useful for:

* Application responsiveness
* Processor utilization (hiding I/O time)
* Failure isolation
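To make the fetch-decode-execute cycle concrete, here is a minimal sketch of a control-unit loop in Python. It is a toy built on assumptions: the tuple-based instruction encoding, the register names `A` and `B`, and the `run` function are all invented for illustration; real machine code is binary and executed by hardware, not interpreted.

```python
# Toy fetch-decode-execute loop. The instruction encoding and register
# names are invented for illustration only.

def run(program):
    registers = {"A": 0, "B": 0}
    pc = 0  # program counter: address of the next instruction
    while True:
        op, *args = program[pc]          # fetch the next instruction
        pc += 1
        if op == "HALT":                 # decode, then execute
            return registers
        elif op == "MOV":                # MOV value, register
            registers[args[1]] = args[0]
        elif op == "ADD":                # ADD src, dst  (dst += src)
            registers[args[1]] += registers[args[0]]
        elif op == "SUB":                # SUB src, dst  (dst -= src)
            registers[args[1]] -= registers[args[0]]
        elif op == "JMP":                # JMP address: change control flow
            pc = args[0]

# Compute 3 + 4: load both values, add them, halt.
print(run([("MOV", 3, "A"), ("MOV", 4, "B"), ("ADD", "A", "B"), ("HALT",)]))
# -> {'A': 3, 'B': 7}
```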
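Next, a minimal sketch of how multitasking hides I/O time, using Python threads. `fake_disk_read` is an invented stand-in for a 20 ms block read, simulated with `time.sleep`; while that thread is blocked on "I/O", the operating system gives the CPU to the other thread, so the two overlap instead of adding up.

```python
import threading
import time

def fake_disk_read():
    # Stand-in for a 20 ms block read. While this thread is blocked,
    # the CPU is free to run other work.
    time.sleep(0.020)

def cpu_work():
    # Pure computation that keeps the CPU busy.
    total = 0
    for i in range(1_000_000):
        total += i
    return total

start = time.perf_counter()
io_thread = threading.Thread(target=fake_disk_read)
io_thread.start()   # the "I/O" proceeds in the background...
cpu_work()          # ...while the main thread uses the CPU
io_thread.join()
print(f"overlapped: {time.perf_counter() - start:.3f}s")
# Running the two steps one after the other instead would take
# roughly the sum of both times.
```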
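Finally, a minimal sketch of taking turns on a shared resource. The shared counter is an invented stand-in for any sequentially accessible resource; the lock forces the threads to access it one at a time, which is exactly the turn-taking described in the sharing-resources section above.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:   # take turns: only one thread mutates the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; without it, updates can be lost
```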