In addition to our work on web development features, we dedicate time to researching parallel programming and FaaS applications at the edge.
This work stems from our education and our passion for making these advanced areas of computer science accessible to a broader community.
Increasing the accessibility of parallel and edge computing, one step at a time.
mpiPython & HPPython
Python has become a key language for users of all skill levels thanks to its clean syntax and approachable tooling. However, despite its widespread use in physics, medicine, and many other fields, it struggles to match the speed of compiled languages such as Java and C.
Our MPI (Message Passing Interface)-backed Python library, mpiPython, and our eventual superset language, HPPython, aim to close this speed gap through parallel computing. Big-data workloads demand millions of calculations every second; HPPython is designed to deliver both speed and ease of use.
Developed from 2020 to 2024, these projects are still evolving as our team continues to improve the parallel computing capabilities of our library and superset language.
Click here for: mpiPython installation package
mpiPython: Extensions of Collective Operations
Despite performance limitations due to its interpreted nature, Python remains a dominant language among scientists and engineers. Enhancing its capabilities for parallel programming unlocks significant potential within parallel and cloud computing environments. mpiPython, a Python binding for message-passing interfaces, empowers Python for Single Program Multiple Data (SPMD) execution, enabling efficient parallel computations. Additionally, Python’s inherent accessibility and versatility foster a growing demand for scaling and parallelizing it on distributed cloud environments. This paper extends mpiPython, bridging the gap in collective operations for parallel computing. The extension builds upon the original mpiPython’s class-based structure, emphasizing two core principles: supporting vanilla Python with MPI and focusing on a C-based, CPU-focused implementation. Unlike existing implementations like mpi4py, mpiPython directly interacts with the Python C API, offering greater control. Two new functions, MPI_Gather and MPI_Reduce, significantly improve efficiency and streamline collective operations between working nodes. The results demonstrate mpiPython’s ability to perform at the level of other libraries while prioritizing a simple implementation accessible to a broad range of users.
Published in: (ICICT ’24) 2024 7th International Conference on Information and Computer Technologies
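For readers new to collective operations, here is a small sketch of the gather/reduce pattern in SPMD style. It uses mpi4py (the existing binding mentioned in the abstract) purely for illustration; mpiPython’s MPI_Gather and MPI_Reduce provide the same semantics through its own class-based API, whose exact call signatures are not reproduced here.

```python
# SPMD gather/reduce sketch using mpi4py for illustration only; mpiPython's
# MPI_Gather / MPI_Reduce expose equivalent collectives via its class-based API.
# Run with: mpiexec -n 4 python collective_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id (0 .. size-1)
size = comm.Get_size()          # total number of processes

# Each rank computes a partial result on its own slice of the problem.
local_values = [rank * 10 + i for i in range(4)]
local_sum = sum(local_values)

# Reduce: combine every rank's partial sum into one total on the root.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

# Gather: collect every rank's raw partial data onto the root.
gathered = comm.gather(local_values, root=0)

if rank == 0:
    print("global sum:", total)
    print("gathered lists:", gathered)
```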
HPPython: Extending Python with HPspmd for Data Parallel Programming
In light of previous endeavors and trends in the realm of parallel programming, HPPython emerges as an essential superset that enhances the accessibility of parallel programming for developers, facilitating scalability across multiple nodes. Despite Python’s popularity as a programming language in scientific and engineering applications and its native support for executing various processes, HPPython brings substantial simplification to the development of parallel programs and empowers program distribution across heterogeneous clusters consisting of multiple physical computers. HPPython leverages the MPI standard for its underlying communication, thereby harnessing the benefits of the SPMD model. Additionally, HPPython introduces novel syntax and constructs, such as parallel loops and distributed lists, while endeavoring to retain the natural essence of the original language. This paper delves into the distinct components of HPPython and elucidates their integration, establishing HPPython as a viable solution for parallel programming in today’s data-driven world.
Published in: (ISCAI ’23) 2023 2nd International Symposium on Computing and Artificial Intelligence
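As a rough illustration of what HPPython abstracts away (not its actual syntax, which is defined in the paper), the sketch below shows the hand-written SPMD boilerplate a programmer would otherwise need to block-distribute data across ranks and loop over only the local portion.

```python
# Hand-rolled SPMD version of a "distributed list" plus a "parallel loop":
# the boilerplate that construct-level approaches such as HPPython aim to
# hide. Illustrative only; HPPython's real syntax differs.
# Run with: mpiexec -n 4 python spmd_boilerplate.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 1_000_000                      # global problem size

# Block distribution: each rank owns a contiguous slice of the index space.
chunk = (N + size - 1) // size
start = rank * chunk
stop = min(start + chunk, N)

# "Parallel loop": each rank iterates only over the indices it owns.
local_total = 0
for i in range(start, stop):
    local_total += i * i           # stand-in for real per-element work

# Combine the per-rank results back into a single answer on rank 0.
grand_total = comm.reduce(local_total, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of squares below", N, "=", grand_total)
```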
FaaS Deployment Paired with Edge Computing
Have you ever used…
- Siri or Alexa?
- Netflix?
- Google Translate?
These services all utilize Function-as-a-Service (FaaS)! FaaS is a cloud computing model that’s revolutionizing how developers build and deploy applications. It allows developers to focus solely on writing individual functions—small, single-purpose pieces of code—that are automatically triggered by specific events. These functions spring to life when needed, execute their task, and then disappear, with the cloud provider handling all the behind-the-scenes complexity.
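In practice, “writing individual functions” means writing a small event handler that the platform invokes on demand. The sketch below uses the AWS Lambda handler convention purely as a familiar example of the model; the services listed above each run on their own platforms, and the JSON payload shape is an assumption for illustration.

```python
# Minimal FaaS-style function: the platform invokes handler() whenever a
# matching event arrives, then tears the instance down when it is idle.
# Shown in the AWS Lambda handler convention as a familiar example of the
# model; other FaaS platforms use analogous entry points.
import json

def handler(event, context):
    """Event-triggered, single-purpose function: build a greeting."""
    # The triggering event carries the input; here we assume a JSON body
    # with a "name" field (an assumption for illustration only).
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")

    # Do the one small task this function exists for...
    message = f"Hello, {name}!"

    # ...and return a response for the platform to deliver to the caller.
    return {"statusCode": 200, "body": json.dumps({"message": message})}
```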
FaaS has traditionally been deployed in the cloud. Pairing FaaS with edge computing, however, is a transformative approach that combines the flexibility of serverless architecture with the proximity of edge devices. This fusion allows for ultra-low-latency execution of functions right where data is generated or consumed, enabling applications to respond instantly to local events. By distributing functions across edge nodes, it creates a highly resilient and scalable network of compute resources that can operate even with intermittent cloud connectivity. This paradigm is particularly powerful for IoT scenarios, real-time analytics, and applications requiring immediate data processing, opening up new possibilities for responsive and efficient distributed systems.
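One way to picture that resilience is an edge-first dispatcher that prefers a nearby node and falls back to the cloud only when the edge is unreachable. The hosts and the /invoke endpoint below are hypothetical placeholders, not part of any real deployment.

```python
# Sketch of an edge-first dispatcher: invoke the function on a nearby edge
# node for low latency, and fall back to the cloud only if the edge node is
# unreachable. All hosts and the /invoke endpoint are hypothetical.
import json
import urllib.request
from urllib.error import URLError

EDGE_NODE = "http://10.0.0.5:8080"      # hypothetical nearby edge node
CLOUD = "https://faas.example.com"      # hypothetical cloud endpoint

def _post(base_url: str, function_name: str, payload: dict) -> dict:
    request = urllib.request.Request(
        f"{base_url}/invoke/{function_name}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=2) as resp:
        return json.loads(resp.read())

def invoke(function_name: str, payload: dict) -> dict:
    """Prefer the edge for low latency; use the cloud only as a fallback."""
    try:
        return _post(EDGE_NODE, function_name, payload)
    except URLError:
        # Edge unreachable: fall back to the (possibly slower) cloud path.
        return _post(CLOUD, function_name, payload)
```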
Click here for: ComFaaS Github Repository
ComFaaS: Comparative Analysis of Edge Computing with Function-as-a-Service
This research paper presents a comprehensive comparison between cloud computing and edge computing in the context of function-as-a-service (FaaS) applications. The project, ComFaaS, aims to evaluate the performance and efficiency of these computing paradigms by conducting benchmark programs with edge-server connections simulating event-triggered executions. The experimental setup involves a cloud computing model where programs are selected from the cloud, and an edge computing model where programs are requested by the edge. The results of this study provide valuable insights into the suitability and effectiveness of cloud and edge computing for real-world applications utilizing FaaS.
Published in: (CCIOT ’23) 2023 8th International Conference on Cloud Computing and Internet of Things
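To make the edge-side model concrete, the sketch below mimics the flow described in the abstract: an edge node requests a benchmark program from the server, runs it locally, and times the event-triggered execution. The URL, endpoint, and program name are hypothetical; the actual protocol lives in the ComFaaS repository linked above.

```python
# Illustrative edge-computing flow (not the actual ComFaaS protocol):
# the edge requests a program from the server, executes it locally, and
# records end-to-end latency for an event-triggered run.
# Server address, endpoint, and program name are hypothetical placeholders.
import time
import urllib.request

SERVER = "http://192.168.1.10:8080"        # hypothetical ComFaaS-style server

def run_on_edge(program_name: str) -> float:
    """Fetch a program from the server, run it on the edge, return latency (s)."""
    start = time.perf_counter()

    # 1. Edge requests the program's code from the server.
    with urllib.request.urlopen(f"{SERVER}/programs/{program_name}") as resp:
        source = resp.read().decode()

    # 2. Edge executes the fetched program locally (exec is for sketch
    #    purposes only; a real deployment would sandbox this step).
    exec(compile(source, program_name, "exec"), {"__name__": "__main__"})

    return time.perf_counter() - start

if __name__ == "__main__":
    latency = run_on_edge("matrix_multiply.py")
    print(f"edge execution latency: {latency:.3f} s")
```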
ComFaaS Distributed: Edge Computing with Function-as-a-Service in Parallel Cloud Environments
Function-as-a-Service (FaaS) has emerged as a revolutionary service platform, abstracting the complexities of hardware, operating systems, and web hosting services. This allows developers to focus solely on implementing their service applications, making FaaS an ideal platform for the scalable manipulation of large data sets. Traditionally deployed on the cloud, FaaS now faces a new frontier: the network edge. Leveraging the edge offers several potential benefits, including reduced latency and improved resource utilization, making it a promising approach for efficient FaaS deployment. As the daily volume and complexity of data we handle continues to grow, adopting a parallel computing paradigm has become increasingly important to ensure fast and efficient execution of computational tasks. Addressing this need, ComFaaS Distributed embarks on a comprehensive comparison of the capabilities of parallelized edge and cloud environments for FaaS deployment. Utilizing benchmark programs meticulously crafted to simulate event-triggered scenarios, ComFaaS Distributed aims to provide valuable insights into the performance and potential of FaaS at the edge, paving the way for a future where parallel computing empowers the efficient and scalable processing of ever-growing data volumes.
Published in: (ICICT ’24) 2024 7th International Conference on Information and Computer Technologies
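As a rough sketch of the parallel side of this comparison, the snippet below fans a benchmark invocation out across several edge nodes concurrently and collects the observed latencies. Node addresses, the /invoke endpoint, and the benchmark name are hypothetical; the real benchmark harness is in the ComFaaS repository.

```python
# Illustrative fan-out of FaaS invocations across several edge nodes in
# parallel. Node addresses, endpoint, and benchmark name are hypothetical;
# this is not the actual ComFaaS Distributed harness.
import time
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

EDGE_NODES = [                          # hypothetical edge node addresses
    "http://192.168.1.21:8080",
    "http://192.168.1.22:8080",
    "http://192.168.1.23:8080",
]

def invoke(node: str, function_name: str) -> float:
    """Trigger one function on one node and return the observed latency (s)."""
    payload = json.dumps({"function": function_name}).encode()
    request = urllib.request.Request(
        f"{node}/invoke", data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request) as resp:
        resp.read()                     # wait for the function to finish
    return time.perf_counter() - start

if __name__ == "__main__":
    # Dispatch the same benchmark to every node at once and compare latencies.
    with ThreadPoolExecutor(max_workers=len(EDGE_NODES)) as pool:
        latencies = list(pool.map(lambda n: invoke(n, "fft_benchmark"), EDGE_NODES))
    for node, latency in zip(EDGE_NODES, latencies):
        print(f"{node}: {latency:.3f} s")
```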