Parallel algorithm
{{Short description|Algorithm that can execute several operations at the same time}} {{More citations needed|date=November 2012}}

In [[computer science]], a '''parallel algorithm''', as opposed to a traditional [[serial algorithm]], is an [[algorithm]] that can execute several operations at the same time. It has been a tradition of computer science to describe serial algorithms in [[abstract machine]] models, often the one known as the [[random-access machine]]. Similarly, many computer science researchers have used a so-called [[parallel random-access machine]] (PRAM) as a parallel, shared-memory abstract machine.<ref>{{cite web |title=Parallel Algorithms |first1=Guy E. |last1=Blelloch |first2=Bruce M. |last2=Maggs |publisher=School of Computer Science, [[Carnegie Mellon University]] |location=USA |access-date=2015-07-27 |url=https://www.cs.cmu.edu/~guyb/papers/BM04.pdf}}</ref><ref>{{cite web |title=Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages |first=Uzi |last=Vishkin |publisher=Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion |date=2009 |url=http://users.umiacs.umd.edu/~vishkin/PUBLICATIONS/classnotes.pdf}}</ref>

Many parallel algorithms are executed [[concurrent computing|concurrently]] – though in general [[concurrent algorithm]]s are a distinct concept – and the two concepts are often conflated, without a clear distinction between which aspect of an algorithm is parallel and which is concurrent. Further, non-parallel, non-concurrent algorithms are often referred to as "[[sequential algorithm]]s", by contrast with concurrent algorithms.

==Parallelizability==
{{see also|Analysis of parallel algorithms}}
Algorithms vary significantly in how parallelizable they are, ranging from easily parallelizable to completely unparallelizable.
Further, a given problem may accommodate different algorithms, which may be more or less parallelizable. Some problems are easy to divide into pieces in this way – these are called ''[[embarrassingly parallel]] problems''. Examples include many algorithms for solving [[Rubik's Cube]]s and for finding values that produce a given [[associative array|hash]].{{citation needed|date=December 2019}}

Some problems cannot be split into parallel portions, as they require the results of a preceding step to carry out the next step – these are called '''{{visible anchor|inherently serial problem}}s'''. Examples include iterative [[numerical analysis|numerical methods]], such as [[Newton's method]], iterative solutions to the [[three-body problem]], and most of the available algorithms for computing [[pi]] (π).{{citation needed|date=August 2019}}

Some sequential algorithms can be converted into parallel algorithms using [[automatic parallelization]].<ref name="MXian1997">{{cite book |author1=Megson G M|author2=Chen Xian|title=Automatic Parallelization For A Class Of Regular Computations|url=https://books.google.com/books?id=1wHtCgAAQBAJ&q=%22parallel+algorithm%22|date=4 January 1997|publisher=World Scientific|isbn=978-981-4498-41-8}}</ref> In many cases, developing an effective parallel algorithm for a task requires new ideas and methods compared with creating a sequential version of the algorithm.
Examples include such practically important problems as searching for a target element in a data structure and evaluating an algebraic expression.<ref>{{Cite book |last=Kurgalin |first=Sergei |title=The discrete math workbook: a companion manual using Python |last2=Borzunov |first2=Sergei |date=2020 |publisher=Springer Nature |isbn=978-3-030-42220-2 |edition=2nd |series=Texts in Computer Science |location=Cham, Switzerland}}</ref>

==Motivation==
Parallel algorithms on individual devices have become more common since the early 2000s because of substantial improvements in [[multiprocessing]] systems and the rise of [[multi-core]] processors. Until the end of 2004, single-core processor performance increased rapidly via [[frequency scaling]], so it was easier to build a computer with one fast core than with many slower cores of the same total [[throughput]], and multicore systems were of more limited use. Since 2004, however, frequency scaling has hit a wall, multicore systems have become more widespread, and parallel algorithms have come into more general use.

==Issues==
===Communication===
The cost or complexity of serial algorithms is estimated in terms of the space (memory) and time (processor cycles) that they take. Parallel algorithms need to optimize one more resource: the communication between processors. Parallel processors communicate in one of two ways: shared memory or message passing.

[[Shared memory]] processing needs additional [[lock (computer science)|locking]] of the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.

[[Message passing]] processing uses channels and message boxes, but this communication adds transfer overhead on the bus, additional memory for queues and message boxes, and latency in the messages.
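The two communication styles can be sketched in Python (a schematic using threads for simplicity; the same patterns apply between real processors, and all names here are illustrative):

```python
import threading
import queue

def shared_memory_count(items, workers=4):
    """Shared memory: workers update one shared counter, and a lock
    serializes the updates (the locking overhead described above)."""
    total = {"value": 0}
    lock = threading.Lock()
    def work(chunk):
        for x in chunk:
            with lock:  # this serializes a portion of the algorithm
                total["value"] += x
    chunks = [items[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total["value"]

def message_passing_count(items, workers=4):
    """Message passing: workers send partial results through a queue
    (the queue is the extra memory and latency described above)."""
    results = queue.Queue()
    def work(chunk):
        results.put(sum(chunk))  # one message per worker
    chunks = [items[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results.get() for _ in range(workers))
```

Both functions compute the same result; the trade-off is where the coordination cost falls – in contention for the lock, or in the transfer and buffering of messages.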
Designs of parallel processors use special [[bus (computing)|buses]], such as a [[crossbar switch|crossbar]], to keep the communication overhead small, but it is the parallel algorithm that determines the volume of the traffic. If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters [[parallel slowdown]].

===Load balancing===
{{main|Load balancing (computing)}}
Another problem with parallel algorithms is ensuring that they are suitably [[load balancing (computing)|load balanced]], by ensuring that ''load'' (overall work) is balanced, rather than input size. For example, checking all numbers from one to a hundred thousand for primality is easy to split among processors; however, if the numbers are simply divided out evenly (1–1,000, 1,001–2,000, etc.), the amount of work will be unbalanced, as smaller numbers are easier to test for primality, and thus some processors will get more work to do than others, which will sit idle until the loaded processors complete.

==Distributed algorithms==
{{main|Distributed algorithm}}
{{expand section|date=February 2014}}
A subtype of parallel algorithms, ''[[distributed algorithm]]s'', are algorithms designed to work in [[cluster computing]] and [[distributed computing]] environments, where additional concerns beyond the scope of "classical" parallel algorithms need to be addressed.

==See also==
* [[Multiple-agent system]] (MAS)
* [[Parallel algorithms for matrix multiplication]]
* [[Parallel algorithms for minimum spanning trees]]
* [[Parallel computing]]
* [[Parareal]]

==References==
{{Reflist}}

==External links==
* [https://www.mcs.anl.gov/~itf/dbpp/ Designing and Building Parallel Programs], US Argonne National Laboratory

{{Parallel computing}}
{{Authority control}}

[[Category:Parallel computing]]
[[Category:Concurrent algorithms]]
[[Category:Distributed algorithms]]