
Computer Fundamentals - Data and Information

What is Data?

Data is raw material: a collection of facts and figures. On its own, raw data has no significant meaning. Data may include text, figures, facts, images, numbers, graphs, and symbols, and it can be generated from many sources, such as sensors, surveys, transactions, and social media.

G15, KPL, and Gud are some examples of data. Data needs to be processed into a useful form, which is known as information. For example, "Gud" is data; after text processing it becomes "Good", which is information.
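To make the data-to-information step concrete, here is a minimal Python sketch (my own illustration, not part of the tutorial): a list of raw temperature readings is processed into a summary that can support a decision.

    # Raw data: individual body-temperature readings in degrees Celsius.
    raw_readings = [36.4, 36.7, 38.9, 37.1, 36.6]

    # Processing the raw data turns it into information.
    average = sum(raw_readings) / len(raw_readings)
    fever_count = sum(1 for t in raw_readings if t >= 38.0)

    # The processed result is information that supports a decision.
    print(f"Average temperature: {average:.1f} C; readings indicating fever: {fever_count}")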

Data

  • Raw material
  • Unstructured
  • Has no context

Information

  • Processed data
  • Structured
  • Has context

A proper analysis of data plays an important role in fields like research, science, business, healthcare, agriculture, and technology, driving decision-making and innovation.

Characteristics of Data

Some characteristics of different types of data are as follows −

Type of Data | Characteristics
Quantitative Data | Numerical; it can be counted or measured.
Qualitative / Descriptive Data | Non-numerical; it describes qualities or categories.
Structured Data | Organised in a fixed format of rows and columns, so it is easy to store and search.
Unstructured Data | Has no predefined format; examples include free text, images, and audio.
Big Data | Extremely large in volume, generated at high velocity and in great variety.
Metadata | Data that describes other data, such as a file's size or creation date.
Streaming Data | Generated continuously and processed in near real time.

Types of Data

Quantitative Data − Available in numerical form, like 50 kg, 165 cm, or 15,887.
Discrete Data − Takes only certain values, such as whole numbers; for example, the number of employees in a department.
Continuous Data − Can take any value within a range; for example, wind speed or temperature, such as a baby's weight changing over a year or a room's temperature changing during the day.
Qualitative Data − Available in descriptive form; for example, the name, gender, address, or features of a person.
Nominal Data − Represents categories with no inherent order; for example, colours or gender.
Ordinal Data − Represents categories with a specific order or ranking; for example, ranking satisfaction levels as "poor," "average," or "excellent."
Categorical Data − Represents categories or labels and is often qualitative; it can include nominal and ordinal data.
Numerical Data − Consists of numbers and can be either discrete or continuous.
Time Series Data − Collected over time intervals, like stock prices, weather data, and sales figures.
Spatial Data − Associated with geographic locations, like Google Maps, GPS data, and satellite images.

What is Information?

Information is processed data. It is always useful and is used in decision-making. A person who has a lot of information about a particular subject is considered knowledgeable. A good information base therefore builds a good knowledge base, and a good knowledge base helps in making sound decisions.

Characteristics of Information

General Characteristics of Information are as follows −

  • It is effective and complete enough to support decisions.
  • Good information is broad in scope.
  • Information relates to the current situation and has an acceptable level of integrity.
  • Information is available within an acceptable response time.
  • Information is concise and free of unnecessary detail.
  • Information is precise and accurate.
  • Information is always relevant.
  • Information is verifiable.
  • Information contains facts that can be shared to support sound decisions.
  • Information is organised and stored for future reference.

Differences Between Data and Information

S.No | Data | Information
1 | Data is raw material | Information is processed data
2 | It is meaningless on its own | It is meaningful
3 | It is not used directly in decision-making | It is used in decision-making
4 | Data does not rely on information | Information relies on data
5 | Data is a collection of facts | Information puts facts in context
6 | Data is unorganised | Information is organised
7 | Data is represented in the form of graphs, numbers, figures, or statistics | Information is presented in the form of words, language, thoughts, and ideas
8 | Data does not have context | Information has context
9 | It can be considered a single, unprocessed unit | It is a product built from a collection of data
10 | It is measured in bits and bytes | It is measured in meaningful units such as quantity and time

Introduction to Computer

What is a Computer?

A computer is an electronic machine that processes raw data and outputs information. It is an electronic device that takes data as input and transforms it using a set of special instructions, known as programs, to produce the desired output. A computer has an internal memory that temporarily stores data and instructions awaiting processing, as well as intermediate results (information), before they are communicated to the recipients via output devices.


What Does the Computer Require in Order to be Operational?

A Computer requires hardware devices and an operating system in order to be operational.

1. Hardware Devices

Monitor: It is a big television-like screen. It is an output device where you see what is happening on the computer.

Keyboard: It is an input device. It is a way of giving commands to the computer with the help of its keys.

Central Processing Unit (CPU): It is the processing unit. It is considered the brain of the computer, as the computer can't perform any activity without the CPU.

Mouse: It is an input device. It is an alternative way of interacting with your PC. Most mice have two buttons, a right and a left button, and a scroll wheel.


2. Operating System (OS)


A PC without an OS is just like a TV without a signal: it will turn on, but you will be looking at a blank screen with no way to interact with it. The most popular operating system is Microsoft Windows, and it is used by most PCs.

The OS acts as the nervous system of the PC, connecting the processor to all of the PC's programs. The OS lets you run other programs, work on projects, and do essentially everything else the PC is capable of.

There are many different versions of Microsoft Windows, and a new version is released every few years.

How to Operate a Computer

A computer is in one of three states at any given time.

OFF: This is exactly what it sounds like: the PC is off, and no parts are running or working. The screen is dark (no pictures), there is no humming sound from the system unit, and the PC does not respond to mouse movements or key presses on the keyboard.

ON: When a PC is on, you should see pictures on the screen, possibly hear a humming sound coming from the system unit, and the pointer on the screen should respond when you move the mouse.

Sleep Mode: Most PCs have a mode called "Sleep," in which the PC is on but has entered an energy-efficient, minimal-power mode. To wake the PC, simply move the mouse around or press the spacebar on the keyboard, and it will wake up and return to exactly where it was when it went to sleep.

Signing On Screen

When you turn the PC on, it will go through a series of automated tasks before it is ready for you to interact with it; this process is called "startup." It usually takes between one and two minutes. If the PC is not working correctly, you may see an error message during startup.

Desktop

After you sign on, the PC will display what is known as your desktop within a few moments. Here you will see a digital representation of something similar to a real-life office space, complete with a desktop, documents and file folders, and a recycle bin.

Features of Computer

Below are some of the features of a computer.

When executing mathematical computations, a computer works significantly faster and more accurately than a human.

Speed of Computer

Calculations made by computers are highly accurate; errors usually arise from inaccurate or inconsistent input data.

A computer contains internal storage for data called main memory. Data is also stored on removable media like CDs, pen drives, and other types of secondary storage.

Computer Memory

Reliability

When given the same set of data repeatedly, a computer will consistently provide the same output, demonstrating its dependability.

The computer completes every task automatically, that is, without human interaction.

Computer Automation

Drawbacks of Computer

Although using a computer has numerous benefits, there are also risks and drawbacks. If used improperly, computers can cause a number of health problems.

The computer is emotionless.

It can't function alone. It requires somebody to work on it and give it instructions.

The computer must be supplied with each command.

No choice can be made by a computer on its own.

What is a Machine?

A machine is a tool that facilitates our job.

It helps us save time and effort.

Humans are not as productive as machines.

Machine Examples Include the Following:

For enjoyment, people use televisions.


To iron the clothes, use an iron box.


An automobile is used for transportation.


Calling is done on a mobile device.


Points to Remember 

A computer is an electronic machine.

The main components required for a computer are a mouse, a monitor, and a keyboard.

The CPU is also known as the “Brain” of the computer.

OS stands for operating system.

The first screen you see when the computer starts is called the desktop.

Learning by Doing

Choose the correct answer:

1. Which part of the computer contains the computer's brains?

B. Keyboard

D. All of the above

Write True or False

1. Windows, Linux, and Android are examples of operating systems. (True/False)

2. Keyboard is an Input device. (True/False)

Sample Questions

1. Choose the correct statement

A. Computer is an electronic machine

B. It performs arithmetic operations

C. Both A) and B)

2.  What is an OS? 

Ans: OS stands for Operating System. The OS lets you run other programs, work on projects, and do essentially everything else the PC is capable of.

3. List various primary parts of the computer.

1. A Motherboard

2. A CPU, i.e., Central Processing Unit

3. RAM, i.e., Random Access Memory

4. Hard drives

5. Computer Mouse

The monitor, CPU, keyboard, mouse, printer, sound system, RAM, hard drive, and many other components make up the computer system's hardware. There are various operating systems in computers such as Microsoft Windows, Linux and so on.


FAQs on Introduction to Computer

1. Which OS does Apple use?

An Apple computer is called a Macintosh (Mac). Its operating system is macOS (formerly OS X), while most other PCs use Windows.

2. Do computers require the Internet to operate?

A computer does not need to access the Internet in order to run properly. The Internet is a way of connecting to other computer users. You can connect to the Internet using a telephone line, a cable connection, or a wireless device (Wi-Fi). For most home PC users this is a paid service, although you can use the Internet for free in some public places, such as a library or a café. A PC can carry out most common tasks (playing music, typing documents, editing pictures) and run programs without an Internet connection. However, to view a web page or send an email, you will need an Internet connection.

3. What does "My computer is possessed!" mean?

It is a common misconception that computers have "a mind of their own." Although PCs can perform certain tasks much more efficiently and quickly than people (such as counting and performing numerical computations), they are ultimately machines and cannot think for themselves. It is fair to say that a PC can do nothing that you don't tell it to do.

New York University

Computer Science Department

Courant Institute of Mathematical Sciences

Course Title: Data Communication & Networks
Course Number: g22.2662-001
Instructor: Jean-Claude Franchitti
Session: 4

Assignment #2

I.          Due: Thursday, March 6, 2008, at the beginning of class.

II.          Objectives

  • See protocols in action.

III.        References

  • Slides and handouts posted on the course Web site
  • Textbook chapters as applicable

IV.        Software Required

  • Wireshark Packet Sniffer and Packet Capture Library (see section V below).
  • Microsoft Word.
  • Win Zip as necessary.

V.        Assignment

            Preamble and Disclaimer:

As noted on the corresponding SourceForge site, the Ethereal development team switched names from Ethereal to Wireshark in May 2006 due to trademark issues (see http://www.wireshark.org/faq.html#q1.2 for more details on this). Incidentally, some people pronounce the name Ethereal as “ether-real,” while others pronounce it “e-thir-E-al,” as in the English word ethereal, which means ghostly or insubstantial. The Ethereal name’s origin comes from the Ethernet protocol, a link-level protocol that is studied extensively in Chapter 5 of the textbook, and in the class labs.

1. Wireshark Lab - Getting Started

One’s understanding of network protocols can often be greatly deepened by “seeing protocols in action” and by “playing around with protocols” – observing the sequence of messages exchanged between two protocol entities, delving down into the details of protocol operation, and causing protocols to perform certain actions and then observing these actions and their consequences. This can be done in simulated scenarios or in a “real” network environment such as the Internet. The Java applets that accompany the textbook take the first approach. In the Wireshark labs, we’ll take the latter approach.   You’ll be running various network applications in different scenarios using a computer on your desk, at home, or in a lab. You’ll observe the network protocols in your computer “in action,” interacting and exchanging messages with protocol entities executing elsewhere in the Internet.    Thus, you and your computer will be an integral part of “live” labs in this class.   You’ll observe, and you’ll learn, by doing.

The basic tool for observing the messages exchanged between executing protocol entities is called a packet sniffer .   As the name suggests, a packet sniffer captures (“sniffs”) messages being sent/received from/by your computer; it will also typically store and/or display the contents of the various protocol fields in these captured messages. A packet sniffer itself is passive. It observes messages being sent and received by applications and protocols running on your computer, but never sends packets itself. Similarly, received packets are never explicitly addressed to the packet sniffer.   Instead, a packet sniffer receives a copy of packets that are sent/received from/by application and protocols executing on your machine.

Figure 1 shows the structure of a packet sniffer. At the right of Figure 1 are the protocols (in this case, Internet protocols) and applications (such as a web browser or ftp client) that normally run on your computer.   The packet sniffer, shown within the dashed rectangle in Figure 1 is an addition to the usual software in your computer, and consists of two parts.   The packet capture library receives a copy of every link-layer frame that is sent from or received by your computer.   Recall from the discussion from section 1.5.2 in the text (Figure 1.20 in the 4 th Edition of the textbook used for the class) that messages exchanged by higher layer protocols   such as HTTP, FTP, TCP, UDP, DNS, or IP all are eventually encapsulated in link-layer frames that are transmitted over physical media such as an Ethernet cable.   In Figure 1, the assumed physical media is an Ethernet, and so all upper layer protocols are eventually encapsulated within an Ethernet frame.   Capturing all link-layer frames thus gives you all messages sent/received from/by all protocols and applications executing in your computer.

The second component of a packet sniffer is the packet analyzer , which displays the contents of all fields within a protocol message.   In order to do so, the packet analyzer must “understand” the structure of all messages exchanged by protocols.   For example, suppose we are interested in displaying the various fields in messages exchanged by the HTTP protocol in Figure 1. The packet analyzer understands the format of Ethernet frames, and so can identify the IP datagram within an Ethernet frame.   It also understands the IP datagram format, so that it can extract the TCP segment within the IP datagram.   Finally, it understands the TCP segment structure, so it can extract the HTTP message contained in the TCP segment.   Finally, it understands the HTTP protocol and so, for example, knows that the first bytes of an HTTP message will contain the string “GET,” “POST,” or “HEAD,” as shown in Figure 2.8 in the text.
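To make the capture-then-analyze split concrete, here is a small Python sketch using the scapy library (an assumption on my part: scapy is not part of this lab, must be installed separately, and live capture requires administrator/root privileges). It sniffs a few frames and walks the same encapsulation a packet analyzer does:

    from scapy.all import sniff, Ether, IP, TCP, Raw

    def dissect(pkt):
        # Walk the encapsulation: Ethernet frame -> IP datagram -> TCP segment -> payload.
        if Ether in pkt and IP in pkt and TCP in pkt:
            print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}")
            if Raw in pkt and pkt[Raw].load[:4] in (b"GET ", b"POST", b"HEAD"):
                # The first bytes of an HTTP request are GET, POST, or HEAD.
                print("  HTTP request:", pkt[Raw].load.split(b"\r\n")[0].decode(errors="replace"))

    # Capture 20 frames on the default interface and apply dissect() to each one.
    sniff(prn=dissect, count=20)

This is only a sketch of the idea; the lab itself uses Wireshark's graphical interface rather than any code.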

We will be using the Wireshark packet sniffer (i.e., www.wireshark.org) for these labs, allowing us to display the contents of messages being sent/received from/by protocols at different levels of the protocol stack.   (Technically speaking, Wireshark is a packet analyzer that uses a packet capture library in your computer). Wireshark is a free network protocol analyzer that runs on Windows, Linux/Unix, and Mac computers. It’s an ideal packet analyzer for our labs – it is stable, has a large user base and well-documented support that includes:

(a)     A user-guide (i.e., http://www.wireshark.org/docs/ )

(b)    Man pages (i.e., http://www.wireshark.org/docs/man-pages/ )

(c)     A detailed FAQ (i.e., http://www.wireshark.org/faq.html )

(d)    Rich functionality that includes the capability to analyze more than 500 protocols

(e)     A well-designed user interface

The Wireshark packet sniffer operates in computers using Ethernet to connect to the Internet, as well as so-called point-to-point protocols such as PPP.

2. Wireshark Lab – Getting Wireshark  

In order to run Wireshark, you will need to have access to a computer that supports both Wireshark and the libpcap packet capture library. If the libpcap software is not installed within your operating system, you will need to install libpcap or have it installed for you in order to use Wireshark.   See http://www.wireshark.org/download.html for a list of supported operating systems and download sites.

Download and install the Wireshark and (if needed) the libpcap software:

·         If needed, download and install the libpcap software.   Pointers to the libpcap software are provided from the Wireshark download pages.   For Windows machines, the libpcap software is known as WinPCap , and can be found at http://winpcap.mirror.ethereal.com/ .

·         Go to www.wireshark.org and download and install the Wireshark binary for your computer. It is recommended to download from http://sourceforge.net/project/showfiles.php?group_id=255 which includes a WinPCap bundle option.

·         Download the Wireshark user guide.   You will most likely only need Chapters 1 and 3.

The Wireshark FAQ has a number of helpful hints and interesting tidbits of information, particularly if you have trouble installing or running Wireshark.

3. Wireshark Lab – Running Wireshark

When you run the Wireshark program, the Wireshark graphical user interface shown in Figure 2a will be displayed. Initially, no data will be displayed in the various windows.

Figure 2a: Wireshark Capture Options Window

Figure 2 below shows the original Ethereal graphical user interface along with an explanation of the various areas which applies to both Ethereal and Wireshark.

                       

The Wireshark interface has five major components:

·         The command menus are standard pulldown menus located at the top of the window.   Of interest to us now are the File and Capture menus.   The File menu allows you to save captured packet data or open a file containing previously captured packet data, and exit the Wireshark application.   The Capture menu allows you to begin packet capture.

·         The packet-listing window displays a one-line summary for each packet captured, including the packet number (assigned by Wireshark; this is not a packet number contained in any protocol’s header), the time at which the packet was captured, the packet’s source and destination addresses, the protocol type, and protocol-specific information contained in the packet. The packet listing can be sorted according to any of these categories by clicking on a column name.   The protocol type field lists the highest level protocol that sent or received this packet, i.e., the protocol that is the source or ultimate sink for this packet.

·         The packet-header details window provides details about the packet selected (highlighted) in the packet listing window.   (To select a packet in the packet listing window, place the cursor over the packet’s one-line summary in the packet listing window and click with the left mouse button.).   These details include information about the Ethernet frame and IP datagram that contains this packet. The amount of Ethernet and IP-layer detail displayed can be expanded or minimized by clicking on the right-pointing or down-pointing arrowhead to the left of the Ethernet frame or IP datagram line in the packet details window.   If the packet has been carried over TCP or UDP, TCP or UDP details will also be displayed, which can similarly be expanded or minimized.   Finally, details about the highest level protocol that sent or received this packet are also provided.

·         The packet-contents window displays the entire contents of the captured frame, in both ASCII and hexadecimal format.

·         Towards the top of the Wireshark graphical user interface is the packet display filter field, into which a protocol name or other information can be entered in order to filter the information displayed in the packet-listing window (and hence the packet-header and packet-contents windows). In the example below, we'll use the packet-display filter field to have Wireshark hide (not display) packets except those that correspond to HTTP messages.

4. Wireshark Lab – Taking Wireshark for a Test Run

The best way to learn about any new piece of software is to try it out! Do the following:

1.       Start up your favorite web browser, which will display your selected homepage.

2.       Start up the Wireshark software.   You will initially see a window similar to that shown in Figure 2, except that no packet data will be displayed in the packet-listing, packet-header, or packet-contents window, since Wireshark has not yet begun capturing packets.

3.       To begin packet capture, select the Capture pull down menu and select Options .   This will cause the “Wireshark: Capture Options” window to be displayed, as shown in Figure 3.

Figure 3: Wireshark Capture Options Window

4.       You can use all of the default values in this window. The network interfaces (i.e., the physical connections) that your computer has to the network will be shown in the Interface pull down menu at the top of the Capture Options window. In case your computer has more than one active network interface (e.g., if you have both a wireless and a wired Ethernet connection), you will need to select an interface that is being used to send and receive packets (most likely the wired interface). After selecting the network interface (or using the default interface chosen by Wireshark), click Start. Packet capture will now begin - all packets being sent/received from/by your computer are now being captured by Wireshark!

5.       After you begin packet capture, you can select Statistics > Protocol Hierarchy from the command menus to obtain a summary of the number of packets of various types that are being captured as shown in Figure 4.

Figure 4: Wireshark Protocol Hierarchy Statistics

6.       While Wireshark is running, enter the URL:         http://gaia.cs.umass.edu/ethereal-labs/INTRO-ethereal-file1.html and have that page displayed in your browser. In order to display this page, your browser will contact the HTTP server at gaia.cs.umass.edu and exchange HTTP messages with the server in order to download this page, as discussed in section 2.2 of the text.   The Ethernet frames containing these HTTP messages will be captured by Wireshark.

7.       After your browser has displayed the INTRO-ethereal-file1.html page, stop Wireshark packet capture by selecting Capture > Stop in the Wireshark command menus. The Wireshark window will display all packets captured since you began packet capture. The Wireshark window should now look similar to Figure 2. You now have live packet data that contains all protocol messages exchanged between your computer and other network entities! The HTTP message exchanges with the gaia.cs.umass.edu web server should appear somewhere in the listing of packets captured. But there will be many other types of packets displayed as well (see, e.g., the many different protocol types shown in the Protocol column in Figure 2). Even though the only action you took was to download a web page, there were evidently many other protocols running on your computer that are unseen by the user. We'll learn much more about these protocols as we progress through the text! For now, you should just be aware that there is often much more going on than "meets the eye"!

8.       Type in "http" (without the quotes, and in lower case – all protocol names are in lower case in Wireshark) into the display filter specification window at the top of the main Wireshark window. Then select Apply (to the right of where you entered "http"). This will cause only HTTP messages to be displayed in the packet-listing window.

9.       The HTTP GET message that was sent from your computer to the gaia.cs.umass.edu HTTP server should be shown among the first few HTTP messages shown in the packet-listing window. When you select the HTTP GET message, the Ethernet frame, IP datagram, TCP segment, and HTTP message header information will be displayed in the packet-header window. Recall that the HTTP GET message that is sent to the gaia.cs.umass.edu web server is contained within a TCP segment, which is contained (encapsulated) in an IP datagram, which is encapsulated in an Ethernet frame (a short code sketch of this encapsulation appears at the end of this section). If this process of encapsulation isn't quite clear yet, review section 1.5 in the text. By clicking on the expansion buttons (+ or -) to the left side of the packet details window, you can minimize or maximize the amount of Frame, Ethernet, Internet Protocol, and Transmission Control Protocol information displayed. Maximize the amount of information displayed about the HTTP protocol. Your Wireshark display should now look roughly as shown in Figure 5 (note, in particular, the minimized amount of protocol information for all protocols except HTTP, and the maximized amount of protocol information for HTTP in the packet-header window).

10.   Exit   Wireshark

Figure 5: Wireshark Display After Step 9

Congratulations!   You’ve now completed the first lab.
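As a side note to step 9, the encapsulation it describes can also be sketched in code. The snippet below is my own illustration, not part of the lab; it assumes the scapy library is installed, and the address used is only a placeholder. It builds an HTTP GET message inside a TCP segment, inside an IP datagram, inside an Ethernet frame:

    from scapy.all import Ether, IP, TCP, Raw

    http_get = (b"GET /ethereal-labs/INTRO-ethereal-file1.html HTTP/1.1\r\n"
                b"Host: gaia.cs.umass.edu\r\n\r\n")

    # HTTP message -> TCP segment -> IP datagram -> Ethernet frame.
    frame = Ether() / IP(dst="192.0.2.1") / TCP(dport=80) / Raw(load=http_get)
    frame.show()  # prints each layer of the encapsulation in turn

Expanding the layers in Wireshark's packet-header window shows exactly the same nesting.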

5. Wireshark Lab – What to hand in

The goal of this first lab was primarily to introduce you to Wireshark. The following questions will demonstrate that you’ve been able to get Wireshark up and running, and have explored some of its capabilities. Answer the following questions, based on your Wireshark experimentation.

1.       What is   the MAC address of your Host? You can find this in the frame level information.

2.       List the different protocols that appear in the protocol column in the unfiltered packet-listing window in step 4.7 above.

3.       How long did it take from when the HTTP GET message was sent until the HTTP OK reply was received? (By default, the value of the Time column in the packet-listing window is the amount of time, in seconds, since Wireshark tracing began.   To display the Time field in time-of-day format, select the Wireshark View pull down menu, then select Time Display Format , then select Time-of-day .)

4.       What is the Internet address of gaia.cs.umass.edu (also known as www-net.cs.umass.edu)? What is the Internet address of your computer?

5.       Print the two HTTP messages displayed in step 4.9 above. To do so, select Print from the Wireshark File command menu, and select “ Selected Packet Only” under Packet Range and “As displayed” under Packet Format and then click OK.

Save your capture in a capture file named Nxxx.cap where Nxxx is your student ID.

Submit this capture file and the answers to the questions above.

Email your assignment (archive) file to your TA.

VI.        Deliverables

  • Electronic: Your assignment (archive) file must be emailed to the TA.   The file must be created and sent by the beginning of class.   After the class period, the homework is late.   The email clock is the official clock.  
  • Written: Printout of the file(s) included in your assignment (archive) file. The cover page supplied on the next page must be the first page of your assignment file

      Fill in the blank area for each field.       

The sequence of the hardcopy submission is:

1.       Cover sheet

2.       Assignment Answer Sheet(s)

VII.       Sample Cover Sheet

Name (last name, first name): ________________________    Username: ______________    Date: ____________    Section: ___________

Assignment 2

Assignment Layout (25%)

o Assignment is neatly assembled on 8 1/2 by 11 paper.

o Cover page with your name (last name first followed by a comma then first name), username and section number with a signed statement of independent effort is included.

o Answers to all assignment questions are correct.

o File name is correct.

Total in points:                                                                       ___________________

Professor's Comments:

Affirmation of my Independent Effort: _____________________________

                                                                                    (Sign here)


Title: DNA Sequence Alignment: An Assignment for OpenMP, MPI, and CUDA/OpenCL

Abstract: We present an assignment for a full Parallel Computing course. Since 2017/2018, we have proposed a different problem each academic year to illustrate various methodologies for approaching the same computational problem using different parallel programming models. They are designed to be parallelized using shared-memory programming with OpenMP, distributed-memory programming with MPI, and GPU programming with CUDA or OpenCL. The problem chosen for this year implements a brute-force solution for exact DNA sequence alignment of multiple patterns. The program searches for exact coincidences of multiple nucleotide strings in a long DNA sequence. The sequential implementation is designed to be clear and understandable to students while offering many opportunities for parallelization and optimization. This assignment addresses key concepts many students find difficult to apply in practical scenarios: race conditions, reductions, collective operations, and point-to-point communications. It also covers the problem of parallel generation of pseudo-random sequences and strategies to notify and stop speculative computations when matches are found. This assignment serves as an exercise that reinforces basic knowledge and prepares students for more complex parallel computing concepts and structures. It has been successfully implemented as a practical assignment in a Parallel Computing course in the third year of a Computer Engineering degree program. Supporting materials for this and previous assignments in this series are publicly available.
Comments: 3 pages, 1 figure, 1 artifact and reproducibility appendix. Accepted for presentation at EduHPC-24: Workshop on Education for High-Performance Computing, to be held during Supercomputing 2024 conference
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
ACM classes: K.3.2; D.1.3
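For context, a minimal sequential sketch of the brute-force exact matching the abstract describes might look like the following (my own illustration, not the authors' published code; the loops over patterns and starting positions are the natural targets for the OpenMP, MPI, or CUDA/OpenCL parallelization the assignment asks for):

    def find_exact_matches(sequence: str, patterns: list[str]) -> dict[str, list[int]]:
        # Return every position in `sequence` where each pattern occurs exactly.
        matches = {p: [] for p in patterns}
        for p in patterns:                                   # loop over patterns
            for start in range(len(sequence) - len(p) + 1):  # loop over starting positions
                if sequence[start:start + len(p)] == p:      # window compare
                    matches[p].append(start)
        return matches

    seq = "ACGTACGTGACGGTACGT"
    print(find_exact_matches(seq, ["ACGT", "GAC"]))
    # {'ACGT': [0, 4, 14], 'GAC': [8]}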



Storage technology explained: Vector databases at the core of AI

We look at the use of vector data in AI and how vector databases work, plus vector embedding, the challenges of storing vector data, and the key suppliers of vector database products.

Antony Adshead, Storage Editor

Artificial intelligence (AI) processing rests on the use of vectorised data. In other words, AI turns real-world information into data that can be used to gain insight, searched for and manipulated.

Vector databases are at the heart of this, because that’s how data created by AI modelling is stored and from where it is accessed during AI inference.

In this article, we look at vector databases and how vector data is used in AI and machine learning. We examine high-dimension data, vector embedding, the storage challenges of vector data and the suppliers that offer vector database products. 

What is high-dimension data?

Vector data is a sub-type of so-called high-dimension data. This is data – to simplify significantly – where the number of features or values per data point far exceeds the number of samples or data points collected.

Low-dimension data – i.e. not many values for each data point – has been more common historically. High-dimension data arises as the ability to capture large amounts of information becomes possible. Contemporary AI that processes speech or images with many possible attributes, contexts, etc, provides a good example.

What are vectors?

Vectors are one of a number of data types in which quantities are represented by single numbers or by more complex arrangements of numbers.

So, in mathematics, a scalar is a single number, such as 5 or 0.5, while a vector is a one-dimensional array of numbers, such as [0.5, 5]. Then a matrix extends this into two dimensions, such as:

[[5, 0.5],
 [0.5, 5]]

Finally, tensors extend this concept into three or more dimensions. A 3D tensor could represent colours in an image (based on values for red, green and blue), while a 4D tensor could add the dimension of time by stringing together or stacking 3D tensors in a video use case.

Tensors add further dimensions and are multi-dimensional arrays of numbers that can represent complex data. That’s why they have lent themselves to use in AI and machine learning and deep learning frameworks such as TensorFlow and PyTorch.
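As a small illustration of that progression (my own sketch, using NumPy rather than TensorFlow or PyTorch), the arrays below move from a scalar up to a 4-D tensor:

    import numpy as np

    scalar = np.array(5.0)                 # a single number, 0 dimensions
    vector = np.array([0.5, 5.0])          # 1-D array, shape (2,)
    matrix = np.array([[5.0, 0.5],
                       [0.5, 5.0]])        # 2-D array, shape (2, 2)
    image  = np.zeros((32, 32, 3))         # 3-D tensor: height x width x RGB channels
    video  = np.zeros((10, 32, 32, 3))     # 4-D tensor: 10 frames stacked over time

    for name, a in [("scalar", scalar), ("vector", vector), ("matrix", matrix),
                    ("image", image), ("video", video)]:
        print(name, a.ndim, a.shape)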

What is vector embedding?

In AI, tensors are used to store and manipulate data. Tensor-based frameworks provide tools to create tensors and perform computations on them.

For example, a ChatGPT request in natural language is parsed and processed for word meaning, semantic context and so on, and then represented in multi-dimensional tensor format. In other words, the real-world subject is converted to something on which mathematical operations can be carried out. This is called vector embedding.

To gain answers to the query, the numerical (albeit complex) result of parsing and processing can be compared to tensor-based representations of existing – i.e., already vector-embedded data – and an answer supplied. You can transfer that basic concept – ingest and represent; compare and respond – to any AI use case, such as images or buyer behaviour. 

What is a vector database?

Vector databases store high-dimensional vector data. Data points are stored in clusters based on similarity.

Vector databases deliver the kind of speed and performance needed for generative AI use cases. Gartner has said that by 2026, more than 30% of enterprises will have adopted vector databases to build foundation models with relevant business data.

While traditional relational databases are built on rows and columns, data points in a vector database take the form of vectors in a number of dimensions. Traditional databases are the classic manifestation of structured data: each column represents a variable, and each row holds a value of that variable.

Meanwhile, vector databases can handle values that exist along multiple continua, represented as vectors. So, they don't have to stick to pre-set variables but can represent the kinds of characteristics one might find in what we think of as unstructured data – shades of colour, or the layout of pixels in an image and what they may represent when interpreted as a whole, for example.

It isn’t impossible to transform unstructured data sources into a traditional relational database to prepare it for AI, but it’s not a trivial matter.

The difference is apparent in search on traditional databases and vector databases. On a SQL database, you search for explicit, definite values, such as keywords or numerical values, and you rely on exact matches to retrieve the results you want.

Vector search represents data in a less precise way. There may be no exact match, but if modelled effectively it will return results that relate to the thing being searched for, and these may surface hidden patterns and relationships that a traditional database would not be able to infer.
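A toy sketch of that idea (my own illustration, not any particular product's API) ranks stored embeddings by cosine similarity to a query vector instead of requiring an exact match; the embedding values below are made up for the example:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings for stored items (in practice produced by an AI model).
    store = {
        "red sports car": np.array([0.90, 0.10, 0.00]),
        "crimson coupe":  np.array([0.85, 0.15, 0.05]),
        "mountain lake":  np.array([0.05, 0.10, 0.95]),
    }

    query = np.array([0.88, 0.12, 0.02])   # embedding of the user's query
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    for name, _ in ranked:
        print(name)   # nearest items first, even without an exact keyword match

A real vector database adds indexing (for example, approximate nearest-neighbour structures) so this comparison does not have to scan every stored vector.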

What are the storage challenges of vector databases?

AI modelling involves writing vector embeddings into a vector database for very large quantities of often nonmathematical data, like words, sounds or images. AI inference then compares vector-embedded data using the model and newly supplied queries.

This is carried out by very high-performance processors, most notably graphics processing units (GPUs), which offload very large amounts of processing from server CPUs.

Vector databases can be subject to extreme I/O demands – especially during modelling – and will need the capability to scale massively and potentially offer portability of data between locations to enable the most efficient processing.

Vector databases can be indexed to accelerate searches and can measure the distance between vectors to provide results based on similarity.

That facilitates tasks such as recommendation systems, semantic search, image recognition and natural language processing tasks. 

Who supplies vector databases?

Proprietary and open source database products include those from DataStax, Elastic, Milvus, Pinecone, Singlestore and Weaviate.

There are also vector database and database search extensions to existing databases, such as PostgreSQL’s open source pgvector, provision of vector search in Apache Cassandra, and vector database capability in Redis.

There are also platforms with vector database capabilities integrated, such as IBM watsonx.data.

Meanwhile, the hyperscaler cloud providers – AWS , Google Cloud and Microsoft Azure – provide vector database and search in their own offerings as well as from third parties via their marketplaces.



The Challenges Of Machine Learning And Computer Vision Adoption In Manufacturing And How To Overcome Them

Forbes Technology Council


Przemek Szleter is the founder and CEO of DAC.digital, with over 16 years of professional experience as a business & IT executive.

Machine learning (ML) and computer vision (CV) technologies are vital branches of artificial intelligence (AI) that help automate tasks and increase efficiency across industries. Experts predict that the market size for these technologies will grow to $503.40 billion for machine learning and $46.96 billion for computer vision by 2030.

Implementing these technologies offers many opportunities to increase productivity and efficiency in manufacturing processes. However, it also comes with challenges that can sometimes feel overwhelming. Every challenge can be overcome, so let’s discuss solutions to several common issues.

High Initial Costs

Incorporating advanced technologies like ML and CV in manufacturing involves a significant upfront investment in technology, infrastructure and skilled personnel. The expenses associated with buying AI hardware, creating tailored software, integrating AI with current systems and training employees can be pretty high, especially for small and medium-sized enterprises (SMEs).

Several solutions can help control initial implementation costs, and they include:


• Gradual Implementation: Starting with small-scale projects with a high return on investment (ROI), such as predictive maintenance or automated quality control, can demonstrate value and justify further investment.

• Cloud-Based AI Solutions: Manufacturers can utilize cloud-based AI services rather than investing in costly on-premises infrastructure, reducing the need for upfront capital expenditure by providing pay-as-you-go pricing models.

• Government Grants And Incentives: Seek government grants, subsidies or tax incentives for advanced manufacturing technologies.

Data Collection, Annotation, Management And Quality

Machine learning and computer vision systems rely heavily on large datasets for training and operation. Quality data allows ML and CV models to be accurate and make fewer mistakes. Manufacturing environments often produce heterogeneous data from various sources, including sensors, machines and enterprise systems. Managing, cleaning and ensuring the quality of this data can be complex and resource-intensive.

There are several ways to ensure quality data for computer vision and machine learning models, and they include:

• Data Integration Platforms: These platforms can gather, standardize and process data from diverse sources into a unified format suitable for AI analysis.

• Data Governance Frameworks: Establishing strong policies ensures data accuracy, consistency and security. It includes regular audits, data validation processes and standardized data entry procedures.

• Synthetic Data: When real-world data is scarce or difficult to collect, manufacturers can use synthetic data to train AI models. It's beneficial for training computer vision models with limited labeled data.

Integration With Legacy Systems

Some legacy systems in factories and other manufacturing facilities aren’t compatible with advanced integrations such as AI. Integrating AI with these outdated systems can be challenging, leading to disruptions in production and increased costs. However, there are a few ways to make it easier:

• Incremental Integration: Gradually integrate new ML and CV solutions by identifying and incorporating specific processes that would benefit the most from AI rather than attempting a full-scale overhaul. It reduces risk and allows for smoother transitions.

• Middleware Solutions: Use platforms that bridge AI systems and legacy infrastructure, enabling communication and data exchange without requiring complete system overhauls.

• Custom APIs: Develop custom APIs (application programming interfaces) to facilitate data exchange between legacy systems and new AI technologies.

Workforce Skill Gaps

Integrating AI into the manufacturing industry necessitates a skilled workforce proficient in AI, data science, machine learning and software development. The manufacturing sector currently lacks enough professionals with these capabilities.

There are several ways to fill in those skill gaps and ensure a smooth adoption of new technologies like machine learning and computer vision:

• Training And Upskilling Programs: Invest in training programs to upskill current employees, focusing on AI, machine learning, data analytics and relevant software tools.

• Partnerships With Educational Institutions: Work closely with universities, technical colleges and training providers to develop customized courses and certifications to equip the workforce with the skills to integrate AI technologies seamlessly.

• Hiring Specialized Talent: Look to hire data scientists, computer vision and machine learning specialists, and software engineers with a proven track record in the manufacturing industry or a strong ability to adapt quickly to specific requirements.

Scalability Issues

Moving AI solutions from small-scale test runs to full-scale implementation can pose significant challenges because of variations in data accessibility, the need for system integration and the complexities of day-to-day operations. Here are some of the actions to take to make scalability easier to achieve:

• Modular ML And CV Systems: Modular and scalable systems enable easy expansion and adaptation to different parts of the manufacturing process.

• Standardization: Scaling is easier when data formats, processes and AI models are standardized across different departments and plants.

• Continuous Monitoring And Adaptation: Continuous monitoring of AI systems enables optimal scaling performance. This way, manufacturers can adapt and optimize the machine learning and computer vision models based on performance feedback and changing production needs.

Careful Consideration And Planning For Successful Implementation

Although adopting new technologies like computer vision and machine learning presents several challenges, they can bring large-scale improvements by automating manufacturing processes. The key lies in carefully weighing whether the benefits outweigh the costs and planning an implementation that stays within budget.

Ultimately, automated processes and predictive maintenance can reduce future costs, bringing value and considerable savings after initial investments.


MIT Technology Review


Google says it’s made a quantum computing breakthrough that reduces errors

The company’s surface code technique allows its quantum bits to faithfully store and manipulate data for longer, which could pave the way for useful quantum computers.

Sophia Chen

Google researchers claim to have made a breakthrough in quantum error correction, one that could pave the way for quantum computers that finally live up to the technology’s promise.

Proponents of quantum computers say the machines will be able to benefit scientific discovery in fields ranging from particle physics to drug and materials design—if only their builders can make the hardware behave as intended. 

One major challenge has been that quantum computers can store or manipulate information incorrectly, preventing them from executing algorithms that are long enough to be useful. The new research from Google Quantum AI and its academic collaborators demonstrates that they can actually add components to reduce these errors. Previously, because of limitations in engineering, adding more components to the quantum computer tended to introduce more errors. Ultimately, the work bolsters the idea that error correction is a viable strategy toward building a useful quantum computer. Some critics had doubted that it was an effective approach, according to physicist Kenneth Brown of Duke University, who was not involved in the research. 

“This error correction stuff really works, and I think it’s only going to get better,” wrote Michael Newman, a member of the Google team, on X. (Google, which posted the research to the preprint server arXiv in August, declined to comment on the record for this story.) 

Quantum computers encode data using objects that behave according to the principles of quantum mechanics. In particular, they store information not only as 1s and 0s, as a conventional computer does, but also in "superpositions" of 1 and 0. Storing information in the form of these superpositions and manipulating their value using quantum interactions such as entanglement (a way for particles to be connected even over long distances) allows for entirely new types of algorithms.

In practice, however, developers of quantum computers have found that errors quickly creep in because the components are so sensitive. A quantum computer represents 1, 0, or a superposition by putting one of its components in a particular physical state, and it is too easy to accidentally alter those states. A component then ends up in a physical state that does not correspond to the information it's supposed to represent. These errors accumulate over time, which means that the quantum computer cannot deliver accurate answers for long algorithms without error correction.

To perform error correction, researchers must encode information in the quantum computer in a distinctive way. Quantum computers are made of individual components known as physical qubits, which can be made from a variety of different materials, such as single atoms or ions. In Google’s case, each physical qubit consists of a tiny superconducting circuit that must be kept at an extremely cold temperature. 

Early experiments on quantum computers stored each unit of information in a single physical qubit. Now researchers, including Google's team, have begun experimenting with encoding each unit of information in multiple physical qubits. They refer to this constellation of physical qubits as a single "logical" qubit, which can represent 1, 0, or a superposition of the two. By design, the single "logical" qubit can hold onto a unit of information more robustly than a single "physical" qubit can. Google's team corrects the errors in the logical qubit using an algorithm known as a surface code, which makes use of the logical qubit's constituent physical qubits.

In the new work, Google made a single logical qubit out of varying numbers of physical qubits. Crucially, the researchers demonstrated that a logical qubit composed of 105 physical qubits suppressed errors more effectively than a logical qubit composed of 72 qubits. That suggests that putting increasing numbers of physical qubits together into a logical qubit “can really suppress the errors,” says Brown. This charts a potential path to building a quantum computer with a low enough error rate to perform a useful algorithm, although the researchers have yet to demonstrate they can put multiple logical qubits together and scale up to a larger machine. 

The researchers also report that the lifetime of the logical qubit exceeds the lifetime of its best constituent physical qubit by a factor of 2.4. Put another way, Google’s work essentially demonstrates that it can store data in a reliable quantum “memory.”

However, this demonstration is just a first step toward an error-corrected quantum computer, says Jay Gambetta, the vice president of IBM’s quantum initiative. He points out that while Google has demonstrated a more robust quantum memory, it has not performed any logical operations on the information stored in that memory. 

“At the end of the day, what matters is: How big of a quantum circuit could you run?” he says. (A “quantum circuit” is a series of logic operations executed on a quantum computer.) “And do you have a path to show how you’re going to run bigger and bigger quantum circuits?”

IBM, whose quantum computers are also composed of qubits made from superconducting circuits, is taking an error correction approach that’s different from Google’s surface code method. The company believes its approach, known as a low-density parity-check code, will be easier to scale, with each logical qubit requiring fewer physical qubits to achieve comparable error suppression. By 2026, IBM intends to demonstrate that it can make 12 logical qubits out of 244 physical qubits, says Gambetta.

Other researchers are exploring other promising approaches, too. Instead of superconducting circuits, a team affiliated with the Boston-based quantum computing company QuEra uses neutral atoms as physical qubits. Earlier this year, it published in Nature a study showing that it had executed algorithms using up to 48 logical qubits made of rubidium atoms.

Gambetta cautions researchers to be patient and not to overhype the progress. “I just don’t want the field to think error correction is done,” he says. Hardware development simply takes a long time because the cycle of designing, building, and troubleshooting is time consuming, especially when compared with software development. “I don’t think it’s unique to quantum,” he says. 


New RAMBO attack steals data using RAM in air-gapped computers

Bill Toulas

September 7, 2024


A novel side-channel attack dubbed "RAMBO" (Radiation of Air-gapped Memory Bus for Offense) generates electromagnetic radiation from a device's RAM to send data from air-gapped computers.

Air-gapped systems, typically used in mission-critical environments with exceptionally high-security requirements, such as governments, weapon systems, and nuclear power stations, are isolated from the public internet and other networks to prevent malware infections and data theft.

Although these systems are not connected to a broader network, they can still be infected by rogue employees introducing malware through physical media (USB drives) or sophisticated supply chain attacks carried out by state actors.

The malware can operate stealthily to modulate the air-gapped system's RAM components in a way that allows the transfer of secrets from the computer to a recipient nearby.

The latest method that falls into this category of attacks comes from Israeli university researchers led by Mordechai Guri, an experienced expert in covert attack channels who previously developed methods to leak data using network card LEDs, USB drive RF signals, SATA cables, and power supplies.

How the RAMBO attack works

To conduct the RAMBO attack, an attacker first plants malware on the air-gapped computer to collect sensitive data and prepare it for transmission. The malware then transmits the data by manipulating memory access patterns (read/write operations on the memory bus) to generate controlled electromagnetic emissions from the device's RAM.

These emissions are essentially a byproduct of the malware rapidly switching electric signals (On-Off Keying "OOK") within the RAM, a process that isn't actively monitored by security products and cannot be flagged or stopped.

[Figure: code used to perform the OOK modulation]

The emitted data is encoded into "1" and "0", represented in the radio signal as "on" and "off". The researchers opted to use Manchester encoding to improve error detection and ensure signal synchronization, reducing the chance of misinterpretation at the receiver's end.
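The researchers' actual transmitter code is not reproduced here. The following is a minimal C sketch of the general idea, assuming a POSIX system and a hypothetical sender that Manchester-encodes each bit, signaling an "on" symbol by saturating the memory bus with copies and an "off" symbol by idling. The buffer size, symbol duration, and use of memcpy as the load generator are illustrative choices, not the parameters from the paper.

    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define BUF_BYTES (1 << 20)   /* 1 MiB scratch buffers to generate heavy DRAM traffic (illustrative) */
    #define SYMBOL_US 1000L       /* duration of one Manchester half-bit, in microseconds (illustrative)  */

    static uint8_t src[BUF_BYTES], dst[BUF_BYTES];

    /* Microseconds elapsed since `start`. */
    static long elapsed_us(const struct timespec *start)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - start->tv_sec) * 1000000L +
               (now.tv_nsec - start->tv_nsec) / 1000L;
    }

    /* "On" symbol: hammer the memory bus for one symbol period (stronger EM emission). */
    static void emit_on(void)
    {
        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);
        while (elapsed_us(&start) < SYMBOL_US)
            memcpy(dst, src, BUF_BYTES);
    }

    /* "Off" symbol: leave the bus idle for one symbol period (weaker emission). */
    static void emit_off(void)
    {
        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);
        while (elapsed_us(&start) < SYMBOL_US)
            ;   /* busy-wait without touching the buffers */
    }

    /* Manchester-encode one byte, most significant bit first:
       a 1 bit is sent as on-then-off, a 0 bit as off-then-on. */
    static void send_byte(uint8_t b)
    {
        for (int i = 7; i >= 0; i--) {
            if ((b >> i) & 1) { emit_on();  emit_off(); }
            else              { emit_off(); emit_on();  }
        }
    }

    int main(void)
    {
        const char *msg = "DATA";   /* the demo word shown in the paper's figures */
        for (const char *p = msg; *p; p++)
            send_byte((uint8_t)*p);
        return 0;
    }

A real receiver would sample the corresponding radio band with an SDR and decode the on/off symbols back into bits, as described next.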

The attacker may use a relatively inexpensive Software-Defined Radio (SDR) with an antenna to intercept the modulated electromagnetic emissions and convert them back into binary information.

[Figure: received signal spelling out the word "DATA"]

Performance and limitations

The RAMBO attack achieves data transfer rates of up to 1,000 bits per second (bps), which equates to 125 bytes per second, or 0.125 KB/s.

At this rate, exfiltrating 1 megabyte of data (1,000,000 bytes at 125 bytes per second) would take roughly 8,000 seconds, or about 2.2 hours, so RAMBO is better suited to stealing small amounts of data such as text, keystrokes, and small files.

In testing, the researchers found that keystrokes can be exfiltrated in real time. Stealing a password takes 0.1 to 1.28 seconds, a 4096-bit RSA key between 4 and 42 seconds, and a small image between 25 and 250 seconds, depending on the transmission speed.

[Figure: data transmission speeds]

Fast transmissions are limited to a maximum range of 300 cm (10 ft), with the bit error rate being 2-4%. Medium-speed transmissions increase the distance to 450 cm (15 ft) for the same error rate. Finally, slow transmissions with nearly zero error rates can work reliably over distances of up to 7 meters (23 ft).

The researchers also experimented with transmissions of up to 10,000 bps but found that anything above 5,000 bps yields a signal-to-noise ratio too low for reliable data transmission.

Stopping RAMBO

The technical paper published on arXiv provides several recommendations for mitigating the RAMBO attack and similar electromagnetic covert-channel attacks, though they all introduce some overhead.

Recommendations include strict zone restrictions to enhance physical defense, RAM jamming to disrupt covert channels at the source, external EM jamming to disrupt radio signals, and Faraday enclosures to block air-gapped systems from emanating EM radiation externally.

The researchers tested RAMBO against sensitive processes running inside virtual machines and found that it remained effective.

However, as the host's memory is prone to various interactions with the host OS and other VMs, the attacks will likely be disrupted quickly.



Data Types in Programming

In programming, a data type is an attribute associated with a piece of data that tells a computer system how to interpret its value. Understanding data types ensures that data is collected in the preferred format and that the value of each property is as expected.


Table of Contents

  • What are Data Types in Programming?

  • Common Data Types in Programming
  • Common Primitive Data Types in Programming
  • Common Composite Data Types
  • Common User-Defined Data Types
  • Dynamic vs Static Typing in Programming
  • Type Casting in Programming
  • Variables and Data Types in Programming
  • Type Safety in Programming

What are Data Types in Programming?

An attribute that identifies a piece of data and instructs a computer system on how to interpret its value is called a data type.

The term “data type” in software programming describes the kind of value a variable possesses and the kinds of mathematical, relational, or logical operations that can be performed on it without leading to an error. Numerous programming languages, for instance, utilize the data types string, integer, and floating point to represent text, whole numbers, and values with decimal points, respectively. An interpreter or compiler can determine how a programmer plans to use a given set of data by looking up its data type.

Data comes in different forms. Examples include:

  • your name – a string of characters
  • your age – usually an integer
  • the amount of money in your pocket – usually a decimal type
  • today’s date – written in a datetime format

Common Data Types in Programming:


1. Primitive Data Types:

Primitives are predefined data types that are independent of all other types and hold basic values of a particular kind, such as text or numeric values. They are the most fundamental types and serve as the foundation for more complex data types. Most programming languages provide some variation of these simple data types.

2. Composite Data Types:

Composite data types are built up from primitive types and are typically defined by the user; they are also referred to as user-defined or non-primitive data types. Composite types fall into four main categories: semi-structured (stores data as a set of relationships), multimedia (stores data as images, audio, or video), homogeneous (requires all values to be of the same data type), and tabular (stores data in tabular form).

3. User Defined Data Types:

A user-defined data type (UDT) is a data type derived from existing data types. You can combine the built-in types already available to create your own customized data types.

Common Primitive Data Types in Programming:

Some common primitive data types are as follows (a short C illustration appears after the list):

  • Integer (int): numeric type for whole numbers without a fractional part. Examples: 300, 0, -300
  • Floating point (float): numeric type for numbers with a fractional part. Examples: 34.67, 56.99, -78.09
  • Character (char): a single letter, digit, punctuation mark, symbol, or blank space. Examples: 'a', '1', '!'
  • Boolean (bool): a true or false value. Examples: true (1), false (0)
  • Date: a calendar date in the YYYY-MM-DD format (ISO 8601 syntax). Example: 2024-01-01
  • Time: a time of day, time since an event, or time interval between events, in the hh:mm:ss format. Example: 12:34:20
  • Datetime: a date and time together in the YYYY-MM-DD hh:mm:ss format. Example: 2024-01-01 12:34:20
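As a small, illustrative C program (assuming a C99 compiler for <stdbool.h>), the first four primitives map directly onto built-in types; C has no built-in date or time type, so those are usually handled through library types such as struct tm.

    #include <stdbool.h>   /* bool, true, false (C99) */
    #include <stdio.h>

    int main(void)
    {
        int   count   = 300;      /* integer: a whole number, positive, zero, or negative */
        float price   = 34.67f;   /* floating point: a number with a fractional part      */
        char  grade   = 'a';      /* character: a single letter, digit, or symbol         */
        bool  enabled = true;     /* boolean: true (1) or false (0)                       */

        printf("%d %.2f %c %d\n", count, price, grade, enabled);
        return 0;
    }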

Common Composite Data Types:

Some common composite data types are as follows (a short C illustration appears after the list):

  • String (string): a sequence of characters, digits, or symbols, always treated as text. Examples: "hello", "ram", "i am a girl"
  • Array: a list of elements in a specific order, typically all of the same type. Example: arr[4] = {0, 1, 2, 3}
  • Pointer: stores the address of a block of memory, typically used to manage dynamically allocated storage. Example: *ptr = 9
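An illustrative C sketch of these composite types: a string is a character array, an array is a fixed-size ordered list, and a pointer holds an address. The variable names are arbitrary.

    #include <stdio.h>

    int main(void)
    {
        char greeting[] = "hello";        /* string: a character array terminated by '\0' */
        int  arr[4]     = {0, 1, 2, 3};   /* array: four ints stored in order             */
        int  value      = 9;
        int *ptr        = &value;         /* pointer: stores the address of `value`       */

        printf("%s %d %d\n", greeting, arr[2], *ptr);
        return 0;
    }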

Common User-Defined Data Types:

Some common user-defined data types are as follows (a short C illustration appears after the list):

  • Enumerated type (enum): a small set of predefined unique values (elements or enumerators) that can be text-based or numerical. Example: Sunday = 0, Monday = 1
  • Structure (struct): combines data items of different kinds under a single name. Example: struct s { ... };
  • Union: a group of members that can have different data types but share the same storage. Example: union u { ... };
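A minimal C sketch of the three user-defined types; the names day, s, and u are illustrative.

    #include <stdio.h>

    enum day { SUNDAY, MONDAY };         /* enumerated type: SUNDAY = 0, MONDAY = 1    */
    struct s { int id; float score; };   /* structure: groups items of different types */
    union  u { int i; float f; };        /* union: members share the same storage      */

    int main(void)
    {
        enum day today = MONDAY;
        struct s rec   = { 1, 98.5f };
        union  u cell;
        cell.i = 42;                     /* only the member written last is meaningful */

        printf("%d %d %.1f %d\n", today, rec.id, rec.score, cell.i);
        return 0;
    }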

Static vs. Dynamic Typing in Programming:

  • Type declaration: static typing requires explicit definition of data types; dynamic typing determines data types at runtime.
  • Declarations: with static typing the programmer explicitly declares variable types; with dynamic typing no type declaration is required.
  • Error detection: static typing catches many errors early, at compile time; with dynamic typing errors may only surface at runtime.
  • Readability: explicit types can enhance code readability; dynamically typed code may be more concise but less explicit.
  • Flexibility: static typing is less flexible because types are fixed at compile time; dynamic typing allows a variable's type to change.
  • Compilation: statically typed languages require a separate compilation step; dynamically typed languages do not.
  • Examples: C, Java, and Swift (static); Python, JavaScript, and Ruby (dynamic).
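As a small illustration of the static side of this comparison, in C the type of a variable is fixed when it is declared, and a mismatched assignment is diagnosed before the program ever runs. The commented-out line below is an illustrative example of such an error.

    #include <stdio.h>

    int main(void)
    {
        int count = 10;      /* the type of `count` is fixed at compile time            */
        count = 25;          /* fine: still an int                                      */
        /* count = "ten"; */ /* constraint violation: the compiler diagnoses at compile
                                time that an int variable cannot hold a string          */

        printf("%d\n", count);
        return 0;
    }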

Type Casting in Programming:

  • Type casting (also known as type conversion) means converting a value of one data type, such as an int, float, or double, into another data type. The conversion can happen in two ways: automatically, performed by the compiler, or manually, performed by the programmer using the cast operator.
  • For example, a programmer can cast a long value into an int if they wish to store it as a simple integer.
  • Type casting is needed whenever a value of one type must be used as another. Imagine a program that stores an age value, say 30, as an integer and needs to display the message "Your age is: 30 years." To include the age in that message (a string), the integer must be converted to a string.
  • As a simple analogy, imagine two kinds of containers: one for numbers and one for words. If you have a number written on a piece of paper, like "42", and you want to put it in the container meant for words, you first have to convert the number into words. Similarly, in programming, you might have a value stored as one type (like the number 42) and want to use it as another type (like text); type casting makes that conversion.

Types of Type Casting:

The process of type casting can be performed in two major ways in a C program:

  • Implicit – done internally by the compiler.
  • Explicit – done manually by the programmer.

Syntax for Type Casting:

<datatype> variableName = (<datatype>) value;

Example of Type Casting:

Two common cases are (1) explicitly converting an int into a double, and (2) automatic conversion of a double into an int, as sketched below.
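A minimal C sketch of both cases; the variable names are illustrative.

    #include <stdio.h>

    int main(void)
    {
        /* 1. Explicit cast: the programmer converts an int into a double manually. */
        int    whole    = 7;
        double asDouble = (double) whole;   /* asDouble is 7.0 */

        /* 2. Implicit conversion: the compiler automatically converts a double
              into an int, discarding the fractional part. */
        double measured  = 3.9;
        int    truncated = measured;        /* truncated is 3 */

        printf("%f %d\n", asDouble, truncated);
        return 0;
    }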

Variables and Data Types in Programming:

A variable is the name of a memory area where data can be stored. In a program, variables are used to hold data; each has three properties: a name, a value, and a type. A variable’s value may change while the program is running.

A variable’s data type characterizes what it can hold: it specifies the kinds of values the variable can store, and the operations that can be performed on it depend on that type, as does the way its data is stored. C programming supports numerous built-in data types, including int, float, double, char, and bool, and each type has a range of values it can represent and a memory usage limit.

Example: Imagine a box labeled “age.” You can put a number like 25 in it initially, and later, you might change it to 30. A box labeled “number” is designed for holding numbers (like 42). Another box labeled “name” is designed for holding words or text (like “John”). So, in simple terms, a variable is like a labeled box where you can put things, and the data type is like a tag on the box that tells you what kind of things it can hold. Together, they help the computer understand and manage the information you’re working with in a program.
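A tiny, illustrative C version of the box analogy:

    #include <stdio.h>

    int main(void)
    {
        int age = 25;           /* a "box" labeled age, holding a number       */
        age = 30;               /* its value can change while the program runs */

        char name[] = "John";   /* a differently typed box, holding text       */

        printf("%s is %d\n", name, age);
        return 0;
    }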

Type Safety in Programming:

Type safety in a programming language is an abstract construct that enables the language to avoid type errors .

Every programming language provides some level of type safety. In a statically typed language, the compiler uses the type system to validate types during program compilation and raises an error if the code attempts to assign a value of the wrong type to a variable; type checks may also be performed at runtime. In both cases, type safety ensures that no improper operations are carried out on the underlying object by the code.

Consider a 32-bit quantity in memory: it could represent four ASCII characters, an int, or a floating-point number, and any of these interpretations might be the intended one depending on context. In assembly language, the programmer bears full responsibility for keeping track of data types; if a machine-level floating-point addition is performed on a 32-bit value that actually represents an integer, the result is meaningless and may vary from machine to machine.
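This ambiguity can be made concrete with a small, illustrative C program that stores one 32-bit pattern and reads it back three ways. The value 0x44415441 (the ASCII codes of 'D', 'A', 'T', 'A') is an arbitrary example.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t raw = 0x44415441u;   /* hex for the ASCII codes of 'D', 'A', 'T', 'A' */

        /* Reinterpret the same 32 bits as a float. */
        float asFloat;
        memcpy(&asFloat, &raw, sizeof asFloat);

        /* Reinterpret the same 32 bits as four characters. */
        char asText[5];
        memcpy(asText, &raw, 4);
        asText[4] = '\0';

        printf("as integer: %u\n", raw);
        printf("as float:   %f\n", asFloat);
        printf("as text:    %s\n", asText);   /* character order depends on the machine's byte order */
        return 0;
    }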

In conclusion, programmers can write dependable and efficient code by using data types appropriately. In addition to helping programmers write better code, data types help organizations manage their data more effectively, from collection to integration.

  • Data types are the basis of programming languages.
  • Different kinds of data types exist for the different kinds of data a program handles.
  • Data types fall into three broad categories:
  • Primitive data types: int, float, char, bool
  • Composite data types: string, array, pointers
  • User-defined data types: enum, struct, union


What time is the debate tonight? How to watch Trump, Harris face off ahead of 2024 election

Tuesday night's presidential debate marks the first between Donald Trump and Kamala Harris.


Debate day has arrived.

The first debate between former President Donald Trump and Vice President Kamala Harris is scheduled to take place Tuesday night at the National Constitution Center in Philadelphia with less than two months until the election .

Trump announced in a post on Truth Social that he had accepted the Sept. 10 debate on ABC  under the same conditions  as the June CNN debate against Biden.

The rules for Tuesday night's debate mean each candidate’s  microphone is only turned on  when it is their turn to speak, there is no studio audience and the candidates aren’t allowed to talk to their staff during breaks or bring any notes with them. Both candidates are provided with only a pen and pad and a bottle of water.


Here's what you need to know about the debate, including what time it starts and how to watch it.

What time is the debate tonight?

The debate is set to begin at 9 p.m. ET.

How to watch the presidential debate

The debate will air on ABC and stream on ABC News Live, Disney+ and Hulu, according to ABC.

On Tuesday, September 10, 2024, at 9pm EDT, USA TODAY will stream The ABC News Presidential Debate Simulcast on the USA TODAY channel available on most smart televisions and devices. 

Who are the moderators of tonight's debate?

"World News Tonight" anchor and managing editor David Muir, along with ABC News Live "Prime" anchor Linsey Davis, will serve as the moderators, according to ABC.

The primetime pre-debate special, "Race for the White House," will be anchored by Martha Raddatz, Jonathan Karl, Mary Bruce and Rachel Scott, and will begin at 8 p.m. ET, the network announced.


What are the qualification requirements for tonight's debate?

Here are the candidate qualification requirements for the debate, according to ABC News:

  • Must meet the requirements outlined in Article II, Section I of the U.S. Constitution to serve as president
  • Must have filed a Statement of Candidacy with the Federal Election Commission
  • Must appear on a sufficient number of state ballots, as certified by the Secretary of State or the relevant election authority in each state, to attain a majority (270) of electoral votes in the presidential election by Sept. 3, 2024
  • Participants must agree to accept the rules and format of the debate, as formulated by ABC News
  • Polls must be conducted using probability sampling by one of the following entities or pairs of entities: ABC News, CNN, Fox News, NBC News, The New York Times/Siena College, Quinnipiac University, The Wall Street Journal and The Washington Post.
  • The four qualifying polls must be conducted by different organizations.
  • Polls must be fielded and released between Aug. 1, 2024, and Sept. 3, 2024. Only polls released publicly and fielded entirely inside the window will qualify.

Gabe Hauari is a national trending news reporter at USA TODAY. You can follow him on X  @GabeHauari  or email him at [email protected].


Columbus City Council briefed on data breach: Here's what we learned


Columbus City Councilmembers said Monday they learned the seriousness of the city's data breach of personal information just like the rest of us — through a tech-savvy local whistleblower who went to the Columbus news media.

"Trust me, I'm also angry," said Council President Shannon Hardin. "My family's personal information and my personal information is floating out there, and unfortunately I had to find that out from the (news) media as well. It's terrible. You can feel it in your stomach."

"I think that there have been elements of this that have been moving so quickly that all of us learned from the media," particularly that personal identifying information was stolen and readable, said Councilmember Nick Bankston, chair of the council committee that oversees technology issues.

In Council's first media briefing since the July 18 attack was discovered, Bankston said Monday that Mayor Andrew J. Ginther originally told the Council only that "there was some type of incident. That was before we knew that it was a ransomware attack.

"As far as finding out that it was an actual ransom attack, we found out only hours before the media found out and the public found out."

The local news media, including The Dispatch, was fed detailed information about the impact of the breach largely by the whistleblower and cybersecurity expert David L. Ross Jr., who goes by "Connor Goodwolf" on issues related to the dark web. At Monday's meeting, Ross sat in the audience as the City Council — largely silent on the issue for more than seven weeks — questioned Ginther's chief technology officer, Sam Orth, on details of the attack.

Orth answered councilmembers' questions for about a half-hour before heading up the stairs at the back of the chamber to the audience balcony — and quickly out a door marked with a sign "Emergency Exit ONLY" to a hallway on the floor above. The move was an apparent bid to evade a large contingent of Columbus news reporters waiting to question him outside the ornate chamber doors used by the public.

But before he left, Orth revealed some key details the Ginther administration had previously not disclosed since July 18, the day city IT workers noticed something amiss with its highly integrated data systems, including:

  • IT workers are still trying to bring hundreds of remaining systems back online.
  • Roughly 23% of the city's computer systems are still down, while another 7% have been only partially restored.
  • The data stolen by foreign cybercriminals represented personal identifying information related to hundreds of thousands of people, including city residents and employees — so many it would be too hard to notify each individually.
  • In the wake of the massive breach, the city will reevaluate what information it demands from citizens and employees and how long it retains it, to lower its exposure to attacks.
  • And the details provided by Ross — that the hacked data contained the identities of juvenile victims, undercover police officers, confidential police informants, driver's licenses, employee information, Social Security numbers and more — are just "some," but not all, of what was stolen. The city is still evaluating the extent of the damage, Orth said.

Ross declined to provide more information on his legal situation with the city Monday. He is facing a city lawsuit for downloading the hacked data and telling reporters what types of information had been put out on the dark web. Much of Ross' information contradicted Ginther's all-is-well claims that the hacked info was encrypted and useless, and that his city IT department had acted heroically to thwart the attack and protect Columbus.

Ross did say Monday that he is close to hiring an attorney. It was largely Ross who informed the public of the scale of the breach and the risk posed to their personal information. However, Ross has now been muzzled by a Franklin County judge and is being sued by the city for potentially hundreds of thousands of dollars.

City Attorney Zach Klein's civil lawsuit against Ross states that by looking at what the city had allowed to be published worldwide, and then informing the public of the damage through the news media, Ross broke laws pertaining to receiving stolen property and disseminating confidential law enforcement information, attempted to intimidate victims of and witnesses to "criminal acts," caused "serious public inconvenience and alarm," and failed to act "as would a reasonably prudent person," among other allegations.

Rhysida, a global cybercriminal group, initially  posted the stolen data for auction  on the dark web in late July, asking for 30 Bitcoin, or around $1.6 million, for the information. The city refused to pay, Ginther has said.

Hours after Ross demonstrated that Ginther's statements that the data was encrypted, and thus still safe, were incorrect, Ginther offered free credit monitoring to every resident of the city and anyone who has interacted with the city in a way that could have recorded personal data, including through the city attorney's office for issues like traffic tickets and car impounds. The monitoring may ultimately cost city taxpayers millions of dollars, Ginther has said.

Hardin said that City Council intends to hold a public hearing questioning the administration on the hack, but hasn't yet set a date. A Council spokesman said it may not happen until early October.

While Council asked numerous questions, it didn't get into whether the city's IT department was prepared for such an attack, whether it had encrypted all "data at rest" for protection, or whether it had properly implemented a "hyper-converged infrastructure," or HCI, system several years ago, a setup that experts said can be akin to putting all your cybereggs into one cyberbasket stored on the cloud.

It also didn't ask Orth whether it was he who had told Ginther that the stolen data was encrypted and useless, leaving it a mystery how that incorrect information came to be released to the public by the mayor.

The city charter gives City Council broad powers to investigate city operations. Sec. 33, titled "Investigations by Council," states the elected legislative body may investigate "official acts and conduct of any city official, relative to any matter upon which the council may act," and to "secure information upon any matter within its authority."

Council is responsible for funding all IT initiatives.

In addition to the public hearing, the Council intends to receive an update from the administration on the cyberbreach at each regular meeting until further notice, in order to stay on top of what has proven to be a dynamic situation.

"I think that accountability is really for us (on the Council) at getting at the heart of the matter," Bankston said.

Bankston said Council also wants to make sure "what are we doing to go forward, to make sure that we are securing our data."

Bankston pledged to push the administration to provide answers for the public; the Council, the police chief and other city officials have said little since the hack was revealed, even when pressed for more information about the extent of the damage. "But also, as we said, we can't compromise the investigation," Bankston added.

He said the timing of Monday's question-and-answer session with Orth happened as a result of City Council having just returned for its first regular meeting since its August recess.

[email protected]

@ReporterBush

