Module 4

Computing Components

Outline Chapter 5

Chapter 5 Computing Components

5.1 Individual Computer Components

5.2 The Stored-Program Concept

von Neumann Architecture

The Fetch–Execute Cycle

RAM and ROM

Secondary Storage Devices

Touch Screens

5.3 Embedded Systems

5.4 Parallel Architectures

Parallel Computing

Classes of Parallel Hardware

Ethical Issues: Is Privacy a Thing of the Past?


Lesson

Day One


Day Two

  • Stored-program concept: data and the instructions that manipulate that data are logically the same, so both can be stored in the same memory

  • Super Simple CPU (in Canvas); a decoding sketch follows this list

    • Opcode - the first four bits; selects the instruction

    • Operand - the remaining 12 bits; its meaning depends on the opcode

    • LDI (immediate: the operand is the data itself)

    • LOD (direct: the operand is the address of the data)

  • Embedded Systems

    • Analogy: Food truck

    • Example: ATM

  • Parallel Architectures

    • Analogy: Making PB&J

  • Distributed Architectures
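
To make the opcode/operand split concrete, here is a minimal decoding sketch in Python. It is not the Canvas simulator itself: the opcode values chosen for LDI and LOD are assumptions for illustration (check the handout for the real encoding), and only these two instructions are modeled.

    LDI, LOD = 0b0001, 0b0010   # hypothetical opcode values, for illustration only

    memory = [0] * 4096         # a 12-bit operand can address 4096 words
    memory[100] = 42            # some data for LOD to fetch

    def decode(word):
        opcode = (word >> 12) & 0xF   # first four bits: the instruction
        operand = word & 0x0FFF       # remaining 12 bits: meaning depends on opcode
        return opcode, operand

    def execute(word):
        opcode, operand = decode(word)
        if opcode == LDI:             # immediate: the operand IS the data
            return operand
        if opcode == LOD:             # direct: the operand is the ADDRESS of the data
            return memory[operand]
        raise ValueError(f"unknown opcode {opcode:04b}")

    print(execute((LDI << 12) | 7))    # 7: the literal value in the instruction
    print(execute((LOD << 12) | 100))  # 42: the value stored at address 100

Note that the same 16-bit word format holds both instructions and data, which is exactly the stored-program idea from the first bullet.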


Project Review

Scratch Application 2

  • Scratch Awards

  • Scratch Kahoot


Project Preview

HTML

  • Create an account on Codecademy

  • Make a "Conceptual Web Site" diagram for your portfolio web site using Visio (reference Portfolio assignment) Tips

  • Make a website folder in your COP1500 folder in OneDrive

  • Copy and paste this into Brackets to start each page: Shortest (useful) HTML5 Document

  • Get something working on w3schools, then incorporate it into your site

  • HTML5 HOME - Paragraphs, CSS, Links, Images, Lists, Classes, Id, Head

ACM KA: Parallel and Distributed Computing (PD)

The past decade has brought explosive growth in multiprocessor computing, including multi-core processors and distributed data centers. As a result, parallel and distributed computing has moved from a largely elective topic to become more of a core component of undergraduate computing curricula. Both parallel and distributed computing entail the logically simultaneous execution of multiple processes, whose operations have the potential to interleave in complex ways. Parallel and distributed computing builds on foundations in many areas, including an understanding of fundamental systems concepts such as concurrency and parallel execution, consistency in state/memory manipulation, and latency. Communication and coordination among processes is rooted in the message-passing and shared-memory models of computing and such algorithmic concepts as atomicity, consensus, and conditional waiting. Achieving speedup in practice requires an understanding of parallel algorithms, strategies for problem decomposition, system architecture, detailed implementation strategies, and performance analysis and tuning. Distributed systems highlight the problems of security and fault tolerance, emphasize the maintenance of replicated state, and introduce additional issues that bridge to computer networking.

Because the terminology of parallel and distributed computing varies among communities, we provide here brief descriptions of the intended senses of a few terms. This list is not exhaustive or definitive, but is provided for the sake of clarity.

  • Parallelism: Using additional computational resources simultaneously, usually for speedup.

  • Concurrency: Efficiently and correctly managing concurrent access to resources.

  • Activity: A computation that may proceed concurrently with others; for example, a program, process, thread, or active parallel hardware component.

  • Atomicity: Rules and properties governing whether an action is observationally indivisible; for example, setting all of the bits in a word, transmitting a single packet, or completing a transaction. (The sketch after this list shows a non-atomic update racing, and a lock restoring atomicity.)

  • Consensus: Agreement among two or more activities about a given predicate; for example, the value of a counter, the owner of a lock, or the termination of a thread.

  • Consistency: Rules and properties governing agreement about the values of variables written, or messages produced, by some activities and used by others (thus possibly exhibiting a data race); for example, sequential consistency, stating that the values of all variables in a shared memory parallel program are equivalent to that of a single program performing some interleaving of the memory accesses of these activities.

  • Multicast: A message sent to possibly many recipients, generally without any constraints about whether some recipients receive the message before others. An event is a multicast message sent to a designated set of listeners or subscribers.
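
The terminology above is concrete enough to demonstrate. Below is a minimal Python threading sketch, using only the standard library: count += 1 is a read-modify-write of shared state, so two activities can interleave between the read and the write and lose updates (a data race); taking a lock makes the update observationally indivisible (atomicity). Thread scheduling varies between runs, so the unsafe version may or may not lose updates on any particular execution.

    import threading

    count = 0
    lock = threading.Lock()

    def unsafe(n):
        global count
        for _ in range(n):
            count += 1            # NOT atomic: read, add, write are separate steps

    def safe(n):
        global count
        for _ in range(n):
            with lock:            # the lock makes the read-modify-write indivisible
                count += 1

    def run(worker, n=100_000, workers=4):
        global count
        count = 0
        threads = [threading.Thread(target=worker, args=(n,)) for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return count

    print("unsafe:", run(unsafe))   # can be < 400000: lost updates (a data race)
    print("safe:  ", run(safe))     # always 400000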

Parallelism Fundamentals

Build upon students’ familiarity with the notion of basic parallel execution—a concept addressed in Systems Fundamentals—to delve into the complicating issues that stem from this notion, such as race conditions and liveness.

KA Topics:

  • Multiple simultaneous computations

  • Goals of parallelism (e.g., throughput) versus concurrency (e.g., controlling access to shared resources)

  • Parallelism, communication, and coordination

    • Programming constructs for coordinating multiple simultaneous computations

    • Need for synchronization

  • Programming errors not found in sequential programming

    • Data races (simultaneous read/write or write/write of shared state)

    • Higher-level races (interleavings violating program intention, undesired non-determinism)

    • Lack of liveness/progress (deadlock, starvation); see the deadlock sketch after this list
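
Deadlock, the liveness failure named above, can be reproduced on purpose. This sketch uses only standard-library Python, with acquire timeouts added so the demonstration terminates instead of hanging: two threads take the same pair of locks in opposite orders, so each ends up holding one lock while waiting for the other.

    import threading
    import time

    a, b = threading.Lock(), threading.Lock()

    def worker(first, second, name):
        with first:
            time.sleep(0.1)                 # give the other thread time to grab its lock
            if second.acquire(timeout=1):   # timeout so the demo can report and exit
                print(name, "finished")
                second.release()
            else:
                print(name, "stuck: this would be a deadlock")

    # Opposite acquisition orders: t1 holds a and waits for b while
    # t2 holds b and waits for a, so neither can make progress.
    t1 = threading.Thread(target=worker, args=(a, b, "t1"))
    t2 = threading.Thread(target=worker, args=(b, a, "t2"))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

The standard fix is a global lock order: if every thread acquires a before b, the waiting cycle cannot form.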


KA Learning Outcomes:

  1. Distinguish using computational resources for a faster answer from managing efficient access to a shared resource (a sketch contrasting the two follows this list). (Cross-reference GV/Fundamental Concepts, outcome 5.) [Familiarity]

  2. Distinguish multiple sufficient programming constructs for synchronization that may be inter-implementable but have complementary advantages. [Familiarity]

  3. Distinguish data races from higher-level races. [Familiarity]
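
Outcome 1's distinction can be shown side by side: the lock sketches above are concurrency (correctly managing access to a shared resource), while the sketch below is parallelism (extra workers for a faster answer). It is a hedged illustration: slow_square and its half-second sleep are stand-ins for real computation, and the actual speedup depends on core count and process start-up cost.

    from multiprocessing import Pool
    import time

    def slow_square(n):
        time.sleep(0.5)           # stand-in for half a second of real computation
        return n * n

    if __name__ == "__main__":    # required for multiprocessing on Windows/macOS
        data = list(range(8))

        start = time.perf_counter()
        sequential = [slow_square(n) for n in data]   # 8 tasks, one after another
        t_seq = time.perf_counter() - start

        start = time.perf_counter()
        with Pool(4) as pool:                         # 4 workers share the 8 tasks
            parallel = pool.map(slow_square, data)
        t_par = time.perf_counter() - start

        assert sequential == parallel                 # same answer, sooner
        print(f"sequential: {t_seq:.1f}s   parallel: {t_par:.1f}s")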

Machine Level Representation of Data

  1. Explain why everything in a computer, including instructions, is data. [Familiarity]

  2. Explain the reasons for using alternative formats to represent numerical data. [Familiarity]

  3. Describe how negative integers are stored in sign-magnitude and two's-complement representations (see the sketch after this list). [Familiarity]

  4. Explain how fixed-length number representations affect accuracy and precision. [Familiarity]

  5. Describe the internal representation of non-numeric data, such as characters, strings, records, and arrays. [Familiarity]

  6. Convert numerical data from one format to another. [Usage]

  7. Write simple programs at the assembly/machine level for string processing and manipulation. [Usage]
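
Outcomes 3 and 6 lend themselves to a short sketch. The Python below uses only the standard library (the function names are mine) to convert integers to 8-bit sign-magnitude and two's-complement strings and back. It also makes the fixed-length range limits of outcome 4 visible: 8-bit two's complement covers only -128 through 127.

    def to_sign_magnitude(value, bits=8):
        # Sign bit, then the magnitude; note this gives two encodings of zero.
        sign = '1' if value < 0 else '0'
        return sign + format(abs(value), f'0{bits - 1}b')

    def to_twos_complement(value, bits=8):
        # Negative values wrap around: -x is stored as 2**bits - x.
        return format(value % (1 << bits), f'0{bits}b')

    def from_twos_complement(bit_string):
        # The leading bit counts as -2**(n-1) rather than +2**(n-1).
        n = len(bit_string)
        value = int(bit_string, 2)
        return value - (1 << n) if bit_string[0] == '1' else value

    print(to_sign_magnitude(-5))             # 10000101
    print(to_twos_complement(-5))            # 11111011
    print(from_twos_complement('11111011'))  # -5
    print(from_twos_complement('10000000'))  # -128: the 8-bit lower limit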

Assembly Level Machine Organization

  1. Explain the organization of the classical von Neumann machine and its major functional units. [Familiarity]

  2. Describe how an instruction is executed in a classical von Neumann machine, with extensions for threads, multiprocessor synchronization, and SIMD execution (see the sketch below). [Familiarity]
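
As a small illustration of the SIMD extension in outcome 2: a classical von Neumann loop handles one element per trip through the fetch-execute cycle, while a vectorized operation expresses a single "instruction" over many data elements. The sketch below uses NumPy (an assumption; any array library would do), whose compiled inner loops can use the CPU's vector instructions, though whether real SIMD instructions are issued depends on the build.

    import numpy as np

    n = 1_000_000

    # Scalar style: one addition per iteration, the way a plain
    # fetch-execute loop would do it.
    a = list(range(n))
    b = list(range(n))
    c = [x + y for x, y in zip(a, b)]

    # SIMD style: one operation applied across whole arrays; the compiled
    # inner loop can add several elements per machine instruction.
    av = np.arange(n)
    bv = np.arange(n)
    cv = av + bv

    assert c[:5] == list(cv[:5])   # same results, element for element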