CIS110 - Distributed and Parallel Computing Technologies
Question 2 (Parallel Programming)
a) Modern programming languages support parallel computing through a number of language and library features, which are discussed below.
Threads are one of the most important features of parallel programming. A thread is the smallest unit of execution within a process. When a program creates threads, the process is split into several threads that can run in parallel, each executing the work assigned to it. In OpenMP, a team of threads is created with a parallel region, declared using the following directive.
#pragma omp parallel
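For illustration, a minimal C sketch of a parallel region is given below; the thread count of 4 and the printed message are assumptions made only for this example.

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* The team of threads is created here; each thread runs the block once. */
    #pragma omp parallel num_threads(4)
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}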
The main OpenMP features that support parallel programming are given below.
Private and shared variables
Within a parallel region, variables can be declared as private or shared. If a variable needs to be visible to and accessible by all threads simultaneously, it is declared as shared. When a variable is declared as private, each thread gets its own copy of it.
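A minimal sketch of the shared and private clauses, assuming illustrative variable names n and tid:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int n = 8;      /* shared: one copy visible to every thread */
    int tid = -1;   /* private: each thread gets its own copy */

    #pragma omp parallel shared(n) private(tid)
    {
        tid = omp_get_thread_num();   /* writes only this thread's copy */
        printf("thread %d sees shared n = %d\n", tid, n);
    }
    return 0;
}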
Parallelizing the for loop
For loops can be parallelized in OpenMP using the directive
#pragma omp for
This directive is placed immediately before the for loop. It divides the iterations of the loop among the threads in the team, so the iterations are executed simultaneously by the threads.
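A minimal sketch of a parallelized for loop, assuming an illustrative array a of 100 elements:

#include <stdio.h>

int main(void) {
    int a[100];

    #pragma omp parallel
    {
        /* The 100 iterations are divided among the threads in the team. */
        #pragma omp for
        for (int i = 0; i < 100; i++)
            a[i] = i * i;
    }

    printf("a[99] = %d\n", a[99]);   /* 9801 */
    return 0;
}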
Critical sections
In OpenMP, a part of the code can be marked as a critical section. Code declared critical is still executed by every thread, but never by more than one thread at a time. Executing a critical section one thread at a time prevents threads from colliding when they update a shared (global) variable.
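A minimal sketch of a critical section, assuming an illustrative shared variable counter:

#include <stdio.h>

int main(void) {
    int counter = 0;   /* shared variable updated by every thread */

    #pragma omp parallel
    {
        /* Only one thread at a time may execute this block,
           so no update to counter is lost. */
        #pragma omp critical
        counter += 1;
    }

    printf("counter = %d (one increment per thread)\n", counter);
    return 0;
}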
Reduction
A reduction is used when the partial results computed by the individual threads must be combined into a single final result. This is similar to the map-reduce strategy.
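A minimal sketch of a reduction that sums the numbers 1 to 100, with the variable name sum assumed only for the example:

#include <stdio.h>

int main(void) {
    int sum = 0;

    /* Each thread accumulates into a private copy of sum; the partial
       sums are combined with + when the loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 100; i++)
        sum += i;

    printf("sum = %d\n", sum);   /* 5050 */
    return 0;
}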
Barrier
A barrier stops threads from progressing past a particular point in the code: each thread is suspended at the barrier until every thread in the team has reached it.
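A minimal sketch of a barrier, with the printed messages assumed only for the example:

#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        printf("thread %d: before the barrier\n", omp_get_thread_num());

        /* No thread continues past this point until all threads arrive. */
        #pragma omp barrier

        printf("thread %d: after the barrier\n", omp_get_thread_num());
    }
    return 0;
}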
- b) Problems in parallel programs that use thread and lock functions are often caused by using these functions in inappropriate places, which leads to incorrect results or deadlocks.
The following issues arise when a program is not properly parallelized.
- Threads execute in an unspecified order. When threads are used to parallelize a loop, the iterations must be independent of one another; if the result of an iteration depends on the result of the previous iteration, the output will be wrong because the threads run simultaneously. Otherwise, the variables involved must be given the appropriate shared or private attributes. A sketch of such a loop-carried dependence is given after this list.
- Lock functions are used when a variable is shared by all threads but must only be accessed by one thread at a time. However, careless use of locks can introduce contention and false-sharing problems, which affect the performance of the program.
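A minimal sketch of the first problem, a loop-carried dependence that is parallelized incorrectly; the array a and its recurrence are assumed only for the example:

#include <stdio.h>

int main(void) {
    int n = 16, a[16] = {0};

    /* INCORRECT: iteration i reads the value written by iteration i-1,
       so the result depends on the order in which the threads run. */
    #pragma omp parallel for
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1] + 1;

    printf("a[15] = %d (should be 15, but may differ)\n", a[n - 1]);
    return 0;
}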
The program can be rewritten in the following ways:
- Avoid performing heavy computation inside the constructs that handle the user interface.
- Parallelize the for loops with threads after analysing the operations inside them, and give each variable the correct shared or private attribute.
- Use lock functions only after checking whether they can lead to a deadlock; a sketch of a simple, deadlock-free use of the OpenMP lock functions follows this list.
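A minimal sketch of that deadlock-free lock usage, with the variable name hits assumed only for the example (a reduction clause would normally be preferred for this particular pattern):

#include <stdio.h>
#include <omp.h>

int main(void) {
    int hits = 0;
    omp_lock_t lock;
    omp_init_lock(&lock);

    #pragma omp parallel for
    for (int i = 0; i < 100; i++) {
        /* A single lock, always acquired and released in the same order,
           so no deadlock is possible; only one thread updates hits at a time. */
        omp_set_lock(&lock);
        hits++;
        omp_unset_lock(&lock);
    }

    omp_destroy_lock(&lock);
    printf("hits = %d\n", hits);   /* 100 */
    return 0;
}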
Question 3 (Cloud Computing)
- The cloud-native solution for the given WordPress CMS uses Amazon Elastic Kubernetes Service. The problem is that the system has reached its capacity limit and must scale up to handle user requests and avoid unavailability of services. The company therefore needs to upgrade the system without shutting down the CMS, and the upgrade should also make future migrations and updates easier.
In cloud-native applications, each service is packaged as a lightweight container that can scale out and scale in quickly. The services are loosely coupled, so the microservices are independent of one another and one service can be modified without affecting the others.
- In the cloud-native solution, the content management system is divided into a set of microservices based on its functions. With this design, the application keeps running even when there is a problem in one microservice.
Upgrades can therefore be performed easily without affecting user access. The architecture contains three microservices, one for each service of the application: a microservice for text content, a microservice for file storage, and a microservice for live streaming of data. When the system needs to be upgraded, each service can be upgraded separately without affecting the others, and users can access the system without any downtime. Container orchestration handles workload peaks by scaling resources up and down.
Containers play a major role in cloud-native solutions, and load balancing is also applied: instead of overloading a single resource, the load is distributed across multiple resources, which improves the performance of the system.