Adding Parallelism to Sequential Programs – a Combined Method

Wiktor Bohdan Daszczuk, Denny Bogdan Czejdo, Wojciech Grześkowiak


This article presents a method for developing software for multiprocessor computers. It describes the identification of parallelizable structures in sequential code; three such structures were identified and examined in detail. Algorithms based on static analysis determine whether given parts of the code can be parallelized, and the proposed techniques show how, where possible, existing sequential structures can be transformed into parallel programs. The method also includes a dynamic evaluation step, which assesses the efficiency of the resulting parallel programs. The algorithms have been implemented in C# as a tool for sequential programs. All proposed methods are illustrated on a common benchmark.
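The static-analysis test mentioned above can be illustrated with Bernstein's conditions, a classic criterion for when two code fragments may safely execute in parallel. The sketch below is a minimal Python illustration of that criterion only, not the article's actual algorithm; all variable and function names are hypothetical.

```python
# Bernstein's conditions: two fragments may run in parallel if neither
# reads a variable the other writes, and their write sets are disjoint.
# Each fragment is summarized by the sets of variables it reads/writes.

def can_parallelize(reads1, writes1, reads2, writes2):
    """Return True if the two fragments have no data dependence."""
    return (not (writes1 & reads2)      # no flow dependence 1 -> 2
            and not (writes2 & reads1)  # no anti-dependence 2 -> 1
            and not (writes1 & writes2))  # no output dependence

# s1: a = b + c   -> reads {b, c}, writes {a}
# s2: d = e * 2   -> reads {e},    writes {d}
print(can_parallelize({"b", "c"}, {"a"}, {"e"}, {"d"}))  # True: independent

# s3: b = a + 1   -> reads {a}, writes {b}; s3 reads what s1 writes
print(can_parallelize({"b", "c"}, {"a"}, {"a"}, {"b"}))  # False: dependence
```

A real tool would extract the read and write sets from the program's abstract syntax tree rather than supply them by hand, but the decision rule itself is exactly this set test.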







International Journal of Electronics and Telecommunications
is a periodical of the Electronics and Telecommunications Committee
of the Polish Academy of Sciences

eISSN: 2300-1933