CIS110-Distributed and Parallel Computing Technologies

Question 2 (Parallel Programming)


Parallel computing is supported by modern programming languages through a number of language and library features, which are discussed below.

The thread is one of the most important concepts in parallel programming. A thread is the smallest unit of execution within a process. When threading is used in a program, the process is split into a number of threads, and each thread can run in parallel. Each thread executes the work given to it inside the threaded region. In OpenMP, a parallel region of threads is defined using the following directive.

#pragma omp parallel

The main features of OpenMP that support parallel programming are given below.

Private and shared variables

Within a parallel region, variables can be declared as private or shared. If a variable needs to be visible to and accessed by all threads simultaneously, it is declared as shared within the parallel region. When a variable is declared as private, each thread gets its own copy of that variable.


Parallelizing the for loop

For loops can be parallelized in OpenMP using the directive

#pragma omp for

This directive is placed immediately before the for loop. It divides the iterations of the loop among the threads, so the operations inside the loop are executed concurrently.

Critical Code

In OpenMP, a part of the code can be declared as a critical section. When a block of code is declared critical, every thread may execute it, but not at the same time: the operations in the critical section are executed by one thread at a time. This helps avoid the collision of threads updating a shared global variable.


Reduction is used when the partial results of many threads are combined into a single end result. This is similar to the map-reduce strategy.


A barrier restricts the progress of thread execution past a particular point: each thread is suspended until all the threads have reached the barrier.

b) Problems in parallel programs that use thread and lock functions are often caused by using these constructs in inappropriate places. This can lead to incorrect results or deadlock.

The following problems arise in parallel programs when the program is not properly parallelized.

  1. Threads execute in an arbitrary order. When thread functions are used to parallelize a loop, the operations inside the loop must be independent for each iteration. If the result of each iteration depends on the result of the previous iteration, the result will be wrong, since the threads execute simultaneously. Otherwise, the appropriate data-sharing attributes (shared or private) should be specified for the variables.
  2. Lock functions are used when a variable is accessed by all the threads but must be updated by only one thread at a time. However, locking can also introduce the false-sharing problem, which degrades the performance of the program.

The program can be rewritten in the following ways:

  1. Avoid doing any heavy processing inside constructs that deal with user-interface operations.
  2. Parallelize the for loops only after analysing the operations inside them, and declare the variables shared or private as appropriate.
  3. Use lock functions only after checking whether they can lead to a deadlock situation.


Question 3 (Cloud Computing)

  1. The cloud-native solution for the given CMS uses WordPress on the Amazon Elastic Kubernetes Service. The problem is that the system has reached its capacity limit and needs to scale up in order to handle user requests and avoid unavailability of services. The company therefore needs to upgrade the system without shutting down the CMS. This upgrade will also make future migrations and updates easier to handle.

In cloud-native applications, the services are bundled as lightweight containers. These containers have the ability to scale out and scale in quickly. The services are loosely coupled, so the microservices are independent of each other, which makes it easy to modify one service without affecting the others.

In the cloud-native solution, the content management system is divided into a set of microservices based on its functions. This transition lets the application keep running, without shutting down, even when there is a problem in one microservice.

This makes upgrades easy to perform without affecting user access. The architecture contains three microservices, one for each service of the application: a microservice for text content, a microservice for file storage, and a microservice for live streaming of data. When the system needs to be upgraded, the process can be carried out separately for each service without affecting the others, and users can access the system without any downtime. Container orchestration helps handle workload peaks by scaling the resources up and down.


Containers play a major role in cloud-native solutions. Load balancing is also performed in cloud-native solutions: instead of overloading a single resource, the load is distributed among multiple resources, which helps improve the performance of the system.

