Module 4
Computing Components
Outline Chapter 5
Chapter 5 Computing Components
5.1 Individual Computer Components
5.2 The Stored-Program Concept
von Neumann Architecture
The Fetch–Execute Cycle
RAM and ROM
Secondary Storage Devices
Touch Screens
5.3 Embedded Systems
5.4 Parallel Architectures
Parallel Computing
Classes of Parallel Hardware
Ethical Issues: Is Privacy a Thing of the Past?
Related FGCU Courses
Lesson
Day One
Chapter questions review
Lab review
Project review
Hardware Components
Find the manual
Hands-on
Look at an ad for a modern computer
Arithmetic/logic unit (ALU): the computer component that performs arithmetic operations (addition, subtraction, multiplication, and division) and logical operations (comparison of two values)
Register: a small storage area in the CPU used to store intermediate values or special data
Academic Store: discount vouchers with an .edu email address
The components of a von Neumann machine
memory
arithmetic/logic unit (ALU)
input/output units
the control unit
Day Two
Data and instructions to manipulate the data are logically the same and can be stored in the same place
Super Simple CPU (in Canvas)
Opcode: the first four bits; specifies the instruction
Operand: the remaining 12 bits; interpreted differently depending on the opcode
LDI (immediate: the operand is the data itself)
LOD (direct: the operand is the address of the data)
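The fetch–decode–execute steps above can be sketched in Python. This is a minimal illustration, not the Super Simple CPU applet itself; the numeric opcode values (STP=0, LDI=1, LOD=2) are assumed placeholders, and keeping the program and its data in one `memory` list shows the stored-program idea.

```python
# Minimal fetch-execute sketch for a Super Simple CPU-style machine.
# Opcode values are illustrative placeholders, not the applet's encoding.
STP, LDI, LOD = 0, 1, 2   # stop, load immediate, load direct (assumed)

def run(memory):
    """Fetch 16-bit words from memory, decode, and execute until STP."""
    acc, pc = 0, 0
    while True:
        instruction = memory[pc]            # fetch the next word
        pc += 1
        opcode = (instruction >> 12) & 0xF  # first four bits
        operand = instruction & 0xFFF       # remaining 12 bits
        if opcode == LDI:                   # immediate: operand is the data
            acc = operand
        elif opcode == LOD:                 # direct: operand is an address
            acc = memory[operand]
        elif opcode == STP:
            return acc

# Instructions and data share one memory: LDI 7, then STP.
program = [(LDI << 12) | 7, (STP << 12)]
print(run(program))  # 7
```

Note that `memory[operand]` treats a word as data while `memory[pc]` treats a word as an instruction, which is exactly the stored-program concept.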
Analogy: Food truck
Example: ATM
Analogy: Making PB&J
Analogy: Restaurant with suppliers
Example: WWW
Project Review
Scratch Application 2
Scratch Awards
Scratch Kahoot
Project Preview
HTML
Create an account on Codecademy
Make a "Conceptual Web Site" diagram for your portfolio web site using Visio (see the Portfolio assignment)
Tips
Make a website folder in your COP1500 folder in OneDrive
Copy and paste this into Brackets to start each page: Shortest (useful) HTML5 Document
Get something working on w3schools then incorporate into your site
HTML5 HOME - Paragraphs, CSS, Links, Images, Lists, Classes, Id, Head
ACM KA: Parallel and Distributed Computing (PD)
The past decade has brought explosive growth in multiprocessor computing, including multi-core processors and distributed data centers. As a result, parallel and distributed computing has moved from a largely elective topic to become more of a core component of undergraduate computing curricula. Both parallel and distributed computing entail the logically simultaneous execution of multiple processes, whose operations have the potential to interleave in complex ways. Parallel and distributed computing builds on foundations in many areas, including an understanding of fundamental systems concepts such as concurrency and parallel execution, consistency in state/memory manipulation, and latency. Communication and coordination among processes is rooted in the message-passing and shared-memory models of computing and such algorithmic concepts as atomicity, consensus, and conditional waiting. Achieving speedup in practice requires an understanding of parallel algorithms, strategies for problem decomposition, system architecture, detailed implementation strategies, and performance analysis and tuning. Distributed systems highlight the problems of security and fault tolerance, emphasize the maintenance of replicated state, and introduce additional issues that bridge to computer networking.
Because the terminology of parallel and distributed computing varies among communities, we provide here brief descriptions of the intended senses of a few terms. This list is not exhaustive or definitive, but is provided for the sake of clarity.
Parallelism: Using additional computational resources simultaneously, usually for speedup.
Concurrency: Efficiently and correctly managing concurrent access to resources.
Activity: A computation that may proceed concurrently with others; for example, a program, process, thread, or active parallel hardware component.
Atomicity: Rules and properties governing whether an action is observationally indivisible; for example, setting all of the bits in a word, transmitting a single packet, or completing a transaction.
Consensus: Agreement among two or more activities about a given predicate; for example, the value of a counter, the owner of a lock, or the termination of a thread.
Consistency: Rules and properties governing agreement about the values of variables written, or messages produced, by some activities and used by others (thus possibly exhibiting a data race); for example, sequential consistency, stating that the values of all variables in a shared memory parallel program are equivalent to that of a single program performing some interleaving of the memory accesses of these activities.
Multicast: A message sent to possibly many recipients, generally without any constraints about whether some recipients receive the message before others. An event is a multicast message sent to a designated set of listeners or subscribers.
Parallelism Fundamentals
Build upon students’ familiarity with the notion of basic parallel execution—a concept addressed in Systems Fundamentals—to delve into the complicating issues that stem from this notion, such as race conditions and liveness.
KA Topics:
Multiple simultaneous computations
Goals of parallelism (e.g., throughput) versus concurrency (e.g., controlling access to shared resources)
Parallelism, communication, and coordination
Programming constructs for coordinating multiple simultaneous computations
Need for synchronization
Programming errors not found in sequential programming
Data races (simultaneous read/write or write/write of shared state)
Higher-level races (interleavings violating program intention, undesired non-determinism)
Lack of liveness/progress (deadlock, starvation)
KA Learning Outcomes:
Distinguish using computational resources for a faster answer from managing efficient access to a shared resource. (Cross-reference GV/Fundamental Concepts, outcome 5.) [Familiarity]
Distinguish multiple sufficient programming constructs for synchronization that may be inter-implementable but have complementary advantages. [Familiarity]
Distinguish data races from higher level races. [Familiarity]
Machine Level Representation of Data
Explain why everything is data, including instructions, in computers. [Familiarity]
Explain the reasons for using alternative formats to represent numerical data. [Familiarity]
Describe how negative integers are stored in sign-magnitude and two's-complement representations. [Familiarity]
Explain how fixed-length number representations affect accuracy and precision. [Familiarity]
Describe the internal representation of non-numeric data, such as characters, strings, records, and arrays. [Familiarity]
Convert numerical data from one format to another. [Usage]
Write simple programs at the assembly/machine level for string processing and manipulation. [Usage]
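The sign-magnitude and two's-complement outcomes above can be illustrated with a small Python sketch for an 8-bit word; the function names here are my own, not from the text.

```python
# Encoding negative integers in an 8-bit word, two ways.

def to_twos_complement(value, bits=8):
    """Encode a signed integer in two's complement (mask to the word size)."""
    return value & ((1 << bits) - 1)

def from_twos_complement(word, bits=8):
    """Decode a two's-complement word back to a signed integer."""
    if word & (1 << (bits - 1)):        # sign bit set -> negative
        return word - (1 << bits)
    return word

def to_sign_magnitude(value, bits=8):
    """Encode a signed integer as a sign bit followed by the magnitude."""
    sign = (1 << (bits - 1)) if value < 0 else 0
    return sign | abs(value)

print(format(to_twos_complement(-5), "08b"))  # 11111011
print(format(to_sign_magnitude(-5), "08b"))   # 10000101
print(from_twos_complement(0b11111011))       # -5
```

The same bit pattern means different numbers under the two schemes, which is why the representation must be fixed before conversion makes sense.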
Assembly Level Machine Organization
Explain the organization of the classical von Neumann machine and its major functional units. [Familiarity]
Describe how an instruction is executed in a classical von Neumann machine, with extensions for threads, multiprocessor synchronization, and SIMD execution. [Familiarity]