Q: Advantages of parallel computing
Best Answer

Save time and/or money: In theory, throwing more resources at a task shortens its time to completion, with potential cost savings. Parallel clusters can be built from cheap, commodity components.

Provide concurrency: A single compute resource can only do one thing at a time, whereas multiple computing resources can be doing many things simultaneously (see the sketch after this list).

Use of non-local resources: Compute resources on a wide area network, or even the Internet, can be used when local compute resources are scarce.

Limits to serial computing: Both physical and practical reasons pose significant constraints on simply building ever-faster serial computers:

  • Transmission speeds - the speed of a serial computer depends directly on how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond); increasing speeds therefore require ever-closer proximity of processing elements.
  • Limits to miniaturization - processor technology allows an increasing number of transistors to be placed on a chip. However, even with molecular- or atomic-level components, a limit will be reached on how small components can be.
  • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors can achieve the same (or better) performance at lower cost.
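
To make the first two points concrete, here is a minimal Python sketch (the toy workload, chunk sizes, and worker count are illustrative, not from the answer): the same CPU-bound task runs once on a single resource and once split across a pool of worker processes using the standard multiprocessing module.

```python
# Minimal sketch: the same CPU-bound task, run serially and then in parallel.
# The workload (summing squares over ranges) is a toy stand-in for a real job.
import time
from multiprocessing import Pool

def sum_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    step = n // 4
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]

    start = time.perf_counter()
    serial = sum(sum_squares(c) for c in chunks)   # one resource, one thing at a time
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=4) as pool:                # four resources working simultaneously
        parallel = sum(pool.map(sum_squares, chunks))
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")
```

On a machine with at least four free cores, the parallel run should finish in roughly a quarter of the serial time, minus the overhead of starting the worker processes.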
Deon Jast ∙ 3y ago

More answers

Wiki User ∙ 16y ago

High speed: many processors working at the same time finish a task faster than a single processor can.

Related questions

What are the advantages and disadvantages of parallel computing?

One advantage of parallel computing is the ability to process information more quickly. A disadvantage is maintaining the system, because it is complex.


What are distributed and parallel computing?

Parallel computing and distributed computing are ways of exploiting parallelism to achieve higher performance. Multiple processing elements are used to solve a problem, either to solve it faster or to solve a larger instance of it. Stated simply, if the processing elements share memory, it is called parallel computing; otherwise it is called distributed computing. Some consider distributed computing a special form of parallel computing.
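
As a rough illustration of that memory distinction (a toy sketch; the helper names are mine, not from the answer), threads within one process can write to the same objects directly, while separate processes have private memory and must pass messages:

```python
# Sketch of shared memory (threads) vs. message passing (processes).
import threading
from multiprocessing import Process, Queue

shared = []  # one address space: every thread in this process sees this list

def thread_worker(x):
    shared.append(x * x)   # direct write to shared memory

def process_worker(x, queue):
    queue.put(x * x)       # private memory: the result must be sent as a message

if __name__ == "__main__":
    threads = [threading.Thread(target=thread_worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared memory:", sorted(shared))

    q = Queue()
    procs = [Process(target=process_worker, args=(i, q)) for i in range(4)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    print("message passing:", sorted(results))
```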


What is the definition of parallel computing?

Parallel computing is the processing of data many bits at a time, as opposed to serial computing, which processes data one bit at a time.
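
To make the bit-at-a-time contrast concrete, here is an illustrative Python sketch (my own toy example, not part of the definition): the function adds two integers one bit position per step, the way a bit-serial machine would, whereas the built-in + operates on all bits of a word at once.

```python
# Illustrative contrast: one bit per step (serial) vs. a whole word at once (parallel).
def bit_serial_add(a, b):
    """Add two non-negative integers one bit position at a time."""
    result, carry, shift = 0, 0, 0
    while a or b or carry:
        bit_a, bit_b = a & 1, b & 1
        total = bit_a + bit_b + carry        # handle a single bit position
        result |= (total & 1) << shift
        carry = total >> 1
        a, b, shift = a >> 1, b >> 1, shift + 1
    return result

assert bit_serial_add(1234, 5678) == 1234 + 5678  # '+' handles all bits in parallel
```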


Is grid computing an advanced computing?

In a sense, yes: "distributed" or "grid" computing is a special type of parallel computing, and it is considered advanced in that it harnesses computing resources spread across many networked machines rather than a single machine.


Which instruction set is used by the Itanium processors?

EPIC, which stands for Explicitly Parallel Instruction Computing.


What is the difference between supercomputer and distributed computing?

A supercomputer is a single machine that tightly couples many processors, while distributed computing spreads work across many separate machines; supercomputers allow both parallel and distributed computing.


What is cost optimal algorithm in parallel computing?

A parallel algorithm is cost-optimal when its cost, defined as the number of processors multiplied by the parallel running time, is of the same order as the running time of the best-known sequential algorithm for the same problem.
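
In the standard formulation (symbols mine, not from the original answer): with p(n) processors and parallel running time T_p(n), the cost is C(n) = p(n) · T_p(n), and the algorithm is cost-optimal when C(n) = Θ(T_s(n)), where T_s(n) is the best-known sequential time. For example, summing n numbers with n/log n processors in Θ(log n) time has cost Θ(n), matching the Θ(n) sequential sum, so it is cost-optimal; doing the same with n processors would cost Θ(n log n), which is not.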


What is massively parallel computing?

Lots of processors all working on the same task simultaneously. For instance, a graphics card uses massively parallel processing to render the display.
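
A loose software analogue (numpy here stands in for the many identical cores of a GPU; the example is mine, not the answer's): the operation is written once and applied to every element of the data at the same time.

```python
# Data-parallel sketch: one operation logically applied to every element,
# the pattern massively parallel hardware such as a GPU runs across many cores.
import numpy as np

frame = np.random.rand(1080, 1920)           # toy "display" of brightness values
brightened = np.clip(frame * 1.2, 0.0, 1.0)  # same task on every pixel at once
print(brightened.shape)
```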


What are the advantages of optical computing?

Pattern recognition: optical systems can correlate entire images at once, essentially at the speed of light, which makes them well suited to recognizing patterns.


What is the difference between distributed and parallel computing?

In the simplest form: parallel computing is a method where several individual (autonomous) systems (CPUs) work in tandem to resolve a common computing workload, while distributed computing is where several disassociated systems work separately to resolve a multi-faceted computing workload.

An example of parallel computing would be two servers that share the workload of routing mail, managing connections to an accounting system or database, solving a mathematical problem, etc. Distributed computing would be more like the SETI program, where each client works on a separate "chunk" of information and returns the completed package to a centralized resource that is responsible for managing the overall workload.

If you think of ten men pulling on one rope to lift a load, that is parallel computing. If ten men have ten ropes and are lifting ten different loads from one place to consolidate at another place, that is distributed computing.

In parallel computing, all processors have access to a shared memory; in distributed computing, each processor has its own private memory.


What are the differences between parallel system and distributed system?

In short, the same distinction as in the previous question: in a parallel system, multiple tightly coupled processors share memory and work in tandem on a common workload; in a distributed system, independent machines, each with its own private memory, work separately on pieces of the workload, like the SETI clients that each process a "chunk" and report back to a central coordinator.