In addition to our work on web development features, we dedicate our time to researching parallel programming and FaaS applications on the Edge.
This stems from our education and passion for making these “high-level” features of computer science accessible to a greater community.
Increasing the accessibility of parallel and edge computing, one step at a time.
mpiPython & HPPython
Python has become a key language for users of all skill levels due to its clean syntax and accessibility. However, despite its wide adoption in physics, medicine, and many other fields, it struggles to keep up with the speed of compiled languages such as Java and C.
Our MPI (Message Passing Interface)-backed Python library and our eventual superset language, HPPython, aim to close this speed gap through parallel computing. The world of big data demands millions of calculations every second; HPPython gives you both speed and ease of use.
Developed from 2020 to 2024, these projects remain active as our team continues to improve the parallel computing prowess of our libraries and superset language.
Click here for: mpiPython installation package
mpiPython: A Streamlined Python Collective Communication Library to MPI
Python’s popularity as an interpreted language, particularly among scientists and engineers, is due to its ease of use and flexibility, despite certain performance limitations. Enhancing Python for parallel programming opens significant opportunities in both parallel and cloud computing environments. mpiPython, a Python message-passing library, enables Single Program Multiple Data (SPMD) execution and supports efficient parallel processing. One of its key advantages is its simplicity—both in installation and use—making it accessible to a wide range of users without compromising performance. This paper introduces an extended version of mpiPython, which addresses shortcomings in collective operations for parallel computing while maintaining the library’s strengths: ease of installation and minimal overhead. mpiPython is designed with a direct implementation in C, avoiding additional layers, unlike mpi4py, allowing for more efficient communication without unnecessary overhead. The latest version introduces key enhancements to communication, including the MPI_Allgather collective and the MPI_Send and MPI_Recv functions. These additions significantly improve performance, particularly in large-scale distributed systems, while preserving mpiPython’s focus on being lightweight and user-friendly. Our results show that mpiPython offers competitive performance compared to other MPI libraries while remaining accessible to users who prioritize simplicity and efficiency. The latest fully functional release of mpiPython is now officially available, providing a streamlined solution for Python-based parallel computing.
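As a conceptual illustration of what the all-gather collective computes (the semantics only, not the mpiPython API itself, whose exact signatures are not shown here), in plain Python:

```python
# Conceptual sketch of MPI all-gather semantics in plain Python.
# Each "rank" contributes its local buffer; afterwards every rank
# holds the concatenation of all buffers, in rank order.
# Illustration only -- not the mpiPython API.

def allgather(local_buffers):
    """Simulate an all-gather across len(local_buffers) ranks."""
    combined = [x for buf in local_buffers for x in buf]
    # Every rank receives the same combined result.
    return [list(combined) for _ in local_buffers]

# Four ranks, each contributing two values.
ranks = [[0, 1], [10, 11], [20, 21], [30, 31]]
result = allgather(ranks)
# result[r] is [0, 1, 10, 11, 20, 21, 30, 31] for every rank r
```

In a real SPMD program each rank runs the same script and calls the collective with only its own buffer; the simulation above just makes the post-condition explicit.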
Published in: (CCIOT ’24) 2024 9th International Conference on Cloud Computing and Internet of Things
mpiPython: Extensions of Collective Operations
Despite performance limitations due to its interpreted nature, Python remains a dominant language among scientists and engineers. Enhancing its capabilities for parallel programming unlocks significant potential within parallel and cloud computing environments. mpiPython, a Python binding for message-passing interfaces, empowers Python for Single Program Multiple Data (SPMD) execution, enabling efficient parallel computations. Additionally, Python’s inherent accessibility and versatility foster a growing demand for scaling and parallelizing it on distributed cloud environments. This paper extends mpiPython, bridging the gap in collective operations for parallel computing. The extension builds upon the original mpiPython’s class-based structure, emphasizing two core principles: supporting vanilla Python with MPI and focusing on a C-based, CPU-focused implementation. Unlike existing implementations like mpi4py, mpiPython directly interacts with the Python C API, offering greater control. Two new functions, MPI_Gather and MPI_Reduce, significantly improve efficiency and streamline collective operations between working nodes. The results demonstrate mpiPython’s ability to perform at the level of other libraries while prioritizing a simple implementation accessible to a broad range of users.
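The two collectives named above can likewise be sketched by their post-conditions (a plain-Python illustration of what gather and reduce compute, not mpiPython's actual interface):

```python
# Conceptual sketch of MPI_Gather and MPI_Reduce semantics.
# Illustration only -- function names here are not mpiPython syntax.
import operator
from functools import reduce as fold

def mpi_gather(local_values, root=0):
    """Root rank receives the list of every rank's value; others get None."""
    n = len(local_values)
    return [list(local_values) if rank == root else None for rank in range(n)]

def mpi_reduce(local_values, op, root=0):
    """Root rank receives op folded across all ranks' values."""
    total = fold(op, local_values)
    return [total if rank == root else None for rank in range(len(local_values))]

values = [1, 2, 3, 4]                 # one value per rank
g = mpi_gather(values)                # root holds [1, 2, 3, 4]
r = mpi_reduce(values, operator.add)  # root holds 10
```

The key difference from all-gather is that only the root rank ends up with the result; the other ranks' buffers are untouched, which is why these operations move less data.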
Published in: (ICICT ’24) 2024 7th International Conference on Information and Computer Technologies
HPPython: Extending Python with HPspmd for Data Parallel Programming
In light of previous endeavors and trends in the realm of parallel programming, HPPython emerges as an essential superset that enhances the accessibility of parallel programming for developers, facilitating scalability across multiple nodes. Despite Python’s popularity as a programming language in scientific and engineering applications and its native support for executing various processes, HPPython brings substantial simplification to the development of parallel programs and empowers program distribution across heterogeneous clusters consisting of multiple physical computers. HPPython leverages the MPI standard for its underlying communication, thereby harnessing the benefits of the SPMD model. Additionally, HPPython introduces novel syntax and constructs, such as parallel loops and distributed lists, while endeavoring to retain the natural essence of the original language. This paper delves into the distinct components of HPPython and elucidates their integration, establishing HPPython as a viable solution for parallel programming in today’s data-driven world.
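HPPython's concrete syntax is not reproduced here, but the core idea behind a distributed list — block-partitioning data across nodes so that a parallel loop touches only local elements — can be sketched in plain Python (the function names below are illustrative, not HPPython constructs):

```python
# Hedged sketch of block-partitioning a "distributed list" across nodes.
# Illustrative only -- not HPPython syntax.

def block_partition(data, num_nodes):
    """Split data into contiguous blocks, one per node (remainder spread
    across the first few nodes)."""
    base, extra = divmod(len(data), num_nodes)
    blocks, start = [], 0
    for node in range(num_nodes):
        size = base + (1 if node < extra else 0)
        blocks.append(data[start:start + size])
        start += size
    return blocks

data = list(range(10))
blocks = block_partition(data, 3)
# Each node's "parallel loop" body runs over its own block only;
# here we just show the local partial sums each node would compute.
local_sums = [sum(b) for b in blocks]
```

A final reduction over the per-node partial results (e.g. summing `local_sums`) recovers the global answer, which is the usual SPMD pattern the MPI layer underneath would carry out.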
Published in: (ISCAI ’23) 2023 2nd International Symposium on Computing and Artificial Intelligence
mpiPython: Prospects for Node Performance
As an interpreted language, Python is limited in the extent to which its code can be optimized. Even so, as a high-level language it remains a strong choice for data scientists to learn and use. If Python could be optimized for parallel programming, its full potential in parallel and cloud computing environments could be achieved. mpiPython is a message-passing module that gives Python the ability to be used in SPMD (Single Program Multiple Data) environments. In this paper, we review basic features of mpiPython, including its runtime communication libraries and design strategies. mpiPython also offers new conveniences for programmers, including simplified traditional MPI initialization. During the development of mpiPython, we realized that individual node performance is uncertain and critical. We analyze mpiPython node performance through benchmarks.
Published in: (ICICT ’23) 2023 6th International Conference on Information and Computer Technologies
FaaS Deployment paired with Edge Computing
Have you ever used…
- Siri or Alexa?
- Netflix?
- Google Translate?
These services all utilize Function-as-a-Service (FaaS)! FaaS is a cloud computing model that’s revolutionizing how developers build and deploy applications. It allows developers to focus solely on writing individual functions—small, single-purpose pieces of code—that are automatically triggered by specific events. These functions spring to life when needed, execute their task, and then disappear, with the cloud provider handling all the behind-the-scenes complexity.
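The event-triggered model described above can be sketched in a few lines of Python: single-purpose functions are registered against event types and invoked only when a matching event arrives (a generic illustration, not any specific provider's API):

```python
# Minimal sketch of the FaaS model: small, single-purpose functions
# registered against event types, invoked only when a matching event
# fires. Generic illustration -- not a real cloud provider's API.

registry = {}

def on(event_type):
    """Decorator: register a function as the handler for an event type."""
    def wrap(fn):
        registry[event_type] = fn
        return fn
    return wrap

def dispatch(event_type, payload):
    """The 'platform': look up the handler, run it, return its result."""
    return registry[event_type](payload)

@on("image.uploaded")
def make_thumbnail(payload):
    # The developer writes only this body; scaling, routing, and
    # teardown are the platform's concern.
    return f"thumbnail of {payload['name']}"

result = dispatch("image.uploaded", {"name": "cat.png"})
```

Everything outside the handler body — scaling instances up and down, wiring events to functions, tearing down after execution — is what the cloud provider handles behind the scenes.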
FaaS is traditionally a deployment structure designed around the cloud. However, FaaS in Edge Computing paradigms is a transformative approach that combines the flexibility of serverless architecture with the proximity of edge devices. This fusion allows for ultra-low latency execution of functions right where data is generated or consumed, enabling applications to respond instantly to local events. By distributing functions across edge nodes, it creates a highly resilient and scalable network of compute resources that can operate even with intermittent cloud connectivity. This paradigm is particularly powerful for IoT scenarios, real-time analytics, and applications requiring immediate data processing, opening up new possibilities for responsive and efficient distributed systems.
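The edge-versus-cloud trade-off above can be made concrete with a toy placement rule: run a function at the edge when its estimated end-to-end latency beats the cloud's (all figures below are invented for illustration):

```python
# Toy placement rule for edge vs cloud FaaS execution.
# All latency figures are illustrative, not measurements.

def estimated_latency_ms(network_rtt_ms, compute_ms, cold_start_ms):
    """Crude end-to-end estimate: network round trip + compute + cold start."""
    return network_rtt_ms + compute_ms + cold_start_ms

def choose_target(edge, cloud):
    """Pick the tier with the lower total estimated latency."""
    if estimated_latency_ms(**edge) <= estimated_latency_ms(**cloud):
        return "edge"
    return "cloud"

# Edge: nearby (low RTT) but slower hardware; cloud: distant but fast.
edge = {"network_rtt_ms": 5, "compute_ms": 40, "cold_start_ms": 10}
cloud = {"network_rtt_ms": 60, "compute_ms": 15, "cold_start_ms": 5}
target = choose_target(edge, cloud)   # edge wins: 55 ms vs 80 ms
```

Even this crude rule captures why proximity matters: when the network round trip dominates, a slower edge device can still finish first.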
Click here for: ComFaaS Github Repository
ComFaaS is expanding its platform with a machine learning scheduler designed to improve how Function-as-a-Service workloads are routed across cloud and edge environments. Instead of relying on fixed, hand-tuned rules, the new scheduler learns latency behavior from real system signals such as transfer time, jitter, queueing, cold starts, and hardware differences to make placement decisions that adapt as conditions change. This upgrade preserves the flexibility of the existing ComFaaS runtime while adding data-driven intelligence for consistently faster and more stable execution across heterogeneous tiers.
Click here for: ComFaaS-ML Github Repository
Machine Learning-Augmented FaaS Scheduling on the Edge-Cloud Continuum
Function-as-a-Service (FaaS) continues to reshape how event-driven applications are developed and deployed, offering automatic scalability, high availability, and minimal operational overhead. Building on this foundation, we present the ComFaaS-ML Scheduler, a machine-learning–based scheduling module that replaces hand-crafted rules with predictive decisions across the edge-cloud continuum. Our design learns mean and tail (p95) latency from workload and system signals, prunes suboptimal placements at runtime, and selects targets that balance performance and stability without modifying the underlying platform. The scheduler integrates seamlessly with ComFaaS, preserving its flexibility and hybrid execution model while adding data-driven adaptivity for diverse, dynamic conditions. We evaluate two modes—pure-heur and pure-ML—under a unified five-seed protocol and report standard system-level metrics (Oracle Match, p95, ε-post, Total, and, where available, Top-K recall). Results show that the learning-based scheduler (pure-ML) consistently improves decision quality and reduces tail latency over a strong heuristic baseline; we discuss ε-post behavior and show how small, practical calibrations keep stability in line while preserving the core gains.
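To illustrate the shape of the decision described above (not ComFaaS-ML's actual model), a scheduler of this kind scores each candidate placement by predicted mean and tail latency, prunes candidates whose predicted tail is unacceptable, and picks the best survivor. The predictor here is a stand-in lookup table with invented numbers:

```python
# Shape of a latency-predicting scheduler, as described in the abstract.
# The "predictor" is a stand-in lookup table with invented numbers --
# not the ComFaaS-ML model or its trained estimates.

PREDICTED = {  # placement -> (predicted mean ms, predicted p95 ms)
    "edge-a": (30.0, 55.0),
    "edge-b": (28.0, 120.0),   # good mean, but a bad tail
    "cloud":  (45.0, 60.0),
}

def score(mean_ms, p95_ms, tail_weight=0.5):
    """Blend mean and tail latency into one number (lower is better)."""
    return (1 - tail_weight) * mean_ms + tail_weight * p95_ms

def schedule(predictions, p95_cap_ms=100.0):
    """Prune placements whose predicted p95 exceeds the cap, then pick
    the lowest blended score among the survivors."""
    survivors = {k: v for k, v in predictions.items() if v[1] <= p95_cap_ms}
    return min(survivors, key=lambda k: score(*survivors[k]))

target = schedule(PREDICTED)   # edge-b is pruned on its tail; edge-a wins
```

The pruning step is what keeps a placement with an attractive mean but an unstable tail (here `edge-b`) from being selected, which mirrors the paper's emphasis on balancing performance with stability.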
Published in: (CCIOT ’25) 2025 10th International Conference on Cloud Computing and Internet of Things
Accepted, DOI coming soon
ComFaaS: A Dynamic Approach to Edge and Cloud Computing with Function-as-a-Service
The rapid evolution of cloud and edge computing has redefined how data-intensive applications are developed and deployed, with Function-as-a-Service (FaaS) playing a pivotal role in this transformation. FaaS provides a serverless model where functions are executed in response to specific events, offering developers automatic scalability, high availability, and reduced infrastructure management overhead. The latest release of ComFaaS brings substantial improvements in flexibility, scalability, security, and ease of use. It introduces a dynamic architecture that enables FaaS applications to be added and executed at runtime, without the need to modify the core system, streamlining feature integration and enhancing scalability. ComFaaS also includes dynamic load balancing, which intelligently distributes workloads between edge and cloud environments, ensuring that tasks always benefit from the most efficient computing resources available. This hybrid approach allows edge and cloud computing to complement each other, resulting in optimized performance tailored to the specific needs of each application. The fully functional release of ComFaaS now delivers a powerful and adaptable solution for modern FaaS deployments, offering a secure and scalable platform for both cloud and edge environments.
Published in: (CCIOT ’24) 2024 9th International Conference on Cloud Computing and Internet of Things
ComFaaS Distributed: Edge Computing with Function-as-a-Service in Parallel Cloud Environments
Function-as-a-Service (FaaS) has emerged as a revolutionary service platform, abstracting the complexities of hardware, operating systems, and web hosting services. This allows developers to focus solely on implementing their service applications, making FaaS an ideal platform for the scalable manipulation of large data sets. Traditionally deployed on the cloud, FaaS now faces a new frontier: the network edge. Leveraging the edge offers several potential benefits, including reduced latency and improved resource utilization, making it a promising approach for efficient FaaS deployment. As the daily volume and complexity of data we handle continues to grow, adopting a parallel computing paradigm has become increasingly important to ensure fast and efficient execution of computational tasks. Addressing this need, ComFaaS Distributed embarks on a comprehensive comparison of the capabilities of parallelized edge and cloud environments for FaaS deployment. Utilizing benchmark programs meticulously crafted to simulate event-triggered scenarios, ComFaaS Distributed aims to provide valuable insights into the performance and potential of FaaS at the edge, paving the way for a future where parallel computing empowers the efficient and scalable processing of ever-growing data volumes.
Published in: (ICICT ’24) 2024 7th International Conference on Information and Computer Technologies
ComFaaS: Comparative Analysis of Edge Computing with Function-as-a-Service
This research paper presents a comprehensive comparison between cloud computing and edge computing in the context of function-as-a-service (FaaS) applications. The project, ComFaaS, aims to evaluate the performance and efficiency of these computing paradigms by conducting benchmark programs with edge-server connections simulating event-triggered executions. The experimental setup involves a cloud computing model where programs are selected from the cloud, and an edge computing model where programs are requested by the edge. The results of this study provide valuable insights into the suitability and effectiveness of cloud and edge computing for real-world applications utilizing FaaS.
Published in: (CCIOT ’23) 2023 8th International Conference on Cloud Computing and Internet of Things
