Publications by Muhsen Owaida
2020
Making Search Engines Faster by Lowering the Cost of Querying Business Rules Through FPGAs
Proceedings of the 2020 International Conference on Management of Data (SIGMOD'20), Portland, OR, USA, June 2020
Business Rule Management Systems (BRMSs) are widely used in industry for a variety of tasks. Their main advantage is to codify in a succinct and queryable manner vast amounts of constantly evolving logic. In BRMSs, rules are typically captured as facts (tuples) over a collection of criteria, and checking them involves querying the collection of rules to find the best match. In this paper, we focus on a real-world use case from the airline industry: determining the minimum connection time (MCT) between flights. The MCT module is part of the flight search engine, and captures the ever-changing constraints at each airport that determine the time to allocate between an arriving and a departing flight for a connection to be feasible. We explore how to use hardware acceleration to (i) improve the performance of the MCT module (lower latency, higher throughput); and (ii) reduce the amount of computing resources needed. A key aspect of the solution is the transformation of a collection of rules into a Non-deterministic Finite State Automaton efficiently implemented on FPGA. Experiments performed on-premises and in the cloud show several orders of magnitude improvement over the existing solution, and the potential to reduce by 40% the number of machines needed for the flight search engine.
@inproceedings{abc,
  abstract  = {Business Rule Management Systems (BRMSs) are widely used in industry for a variety of tasks. Their main advantage is to codify in a succinct and queryable manner vast amounts of constantly evolving logic. In BRMSs, rules are typically captured as facts (tuples) over a collection of criteria, and checking them involves querying the collection of rules to find the best match. In this paper, we focus on a real-world use case from the airline industry: determining the minimum connection time (MCT) between flights. The MCT module is part of the flight search engine, and captures the ever changing constraints at each airport that determine the time to allocate between an arriving and a departing flight for a connection to be feasible. We explore how to use hardware acceleration to (i) improve the performance of the MCT module (lower latency, higher throughput); and (ii) reduce the amount of computing resources needed. A key aspect of the solution is the transformation of a collection of rules into a Non-deterministic Finite state Automaton efficiently implemented on FPGA. Experiments performed on-premises and in the cloud show several orders of magnitude improvement over the existing solution, and the potential to reduce by 40\% the number of machines needed for the flight search engine.},
  author    = {Fabio Maschi and Muhsen Owaida and Gustavo Alonso and Matteo Casalino and Anthony Hock-Koon},
  booktitle = {Proceedings of the 2020 International Conference on Management of Data (SIGMOD{\textquoteright}20)},
  title     = {Making Search Engines Faster by Lowering the Cost of Querying Business Rules Through FPGAs},
  url       = {https://doi.org/10.1145/3318464.3386133},
  venue     = {Portland, OR, USA},
  year      = {2020}
}
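The core idea described in the abstract above, matching a query against a rule set in which unspecified criteria act as wildcards and the most specific matching rule wins, can be illustrated with a small software sketch. This is an analogue for exposition only: the paper compiles the rule set into a Non-deterministic Finite State Automaton evaluated on the FPGA, which this Python snippet does not reproduce, and the rule fields, values, and connection times below are hypothetical.

WILDCARD = "*"

# Each rule is a tuple of criteria plus a minimum connection time in minutes.
# Criteria here are (airport, carrier, connection type); "*" matches anything.
rules = [
    (("ZRH", "*",  "*"),           40),   # airport-wide default
    (("ZRH", "LX", "*"),           35),   # carrier-specific override
    (("ZRH", "LX", "intl-to-dom"), 50),   # most specific rule
]

def matches(criteria, query):
    """A rule applies when every criterion equals the query value or is a wildcard."""
    return all(c == WILDCARD or c == q for c, q in zip(criteria, query))

def specificity(criteria):
    """More non-wildcard fields means a more specific, and therefore preferred, rule."""
    return sum(c != WILDCARD for c in criteria)

def best_match(query):
    """Return the connection time of the most specific rule that matches the query."""
    candidates = [(specificity(c), mct) for c, mct in rules if matches(c, query)]
    return max(candidates)[1] if candidates else None

print(best_match(("ZRH", "LX", "intl-to-dom")))   # -> 50 (exact rule wins)
print(best_match(("ZRH", "BA", "dom-to-dom")))    # -> 40 (falls back to the default)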
2017
Scalable inference of decision tree ensembles: Flexible design for CPU-FPGA platforms
Proceedings of the 27th International Conference on Field Programmable Logic and Applications (FPL), Ghent, Belgium, September 2017
Decision tree ensembles are commonly used in a wide range of applications and are becoming the de facto algorithm for decision-tree-based classifiers. Different trees in an ensemble can be processed in parallel during tree inference, making them a suitable use case for FPGAs. Large tree ensembles, however, require careful mapping of trees to on-chip memory and management of memory accesses. As a result, existing FPGA solutions suffer from the inability to scale beyond tens of trees and lack the flexibility to support different tree ensembles. In this paper we present an FPGA tree ensemble classifier together with a software driver to efficiently manage the FPGA's memory resources. The classifier architecture efficiently utilizes the FPGA's resources to fit half a million tree nodes in on-chip memory, delivering up to 20× speedup over a 10-threaded CPU implementation when fully processing the tree ensemble on the FPGA. It can also combine the CPU and FPGA to scale to tree ensembles that do not fit in on-chip memory, achieving up to an order of magnitude speedup compared to a pure CPU implementation. In addition, the classifier architecture can be programmed at runtime to process varying tree ensemble sizes.
@inproceedings{abc,
  abstract  = {Decision tree ensembles are commonly used in a wide range of applications and becoming the de facto algorithm for decision tree based classifiers. Different trees in an ensemble can be processed in parallel during tree inference, making them a suitable use case for FPGAs. Large tree ensembles, however, require careful mapping of trees to on-chip memory and management of memory accesses. As a result, existing FPGA solutions suffer from the inability to scale beyond tens of trees and lack the flexibility to support different tree ensembles. In this paper we present an FPGA tree ensemble classifier together with a software driver to efficiently manage the FPGA{\textquoteright}s memory resources. The classifier architecture efficiently utilizes the FPGA{\textquoteright}s resources to fit half a million tree nodes in on-chip memory, delivering up to 20{\texttimes} speedup over a 10-threaded CPU implementation when fully processing the tree ensemble on the FPGA. It can also combine the CPU and FPGA to scale to tree ensembles that do not fit in on-chip memory, achieving up to an order of magnitude speedup compared to a pure CPU implementation. In addition, the classifier architecture can be programmed at runtime to process varying tree ensemble sizes.},
  author    = {Muhsen Owaida and Hantian Zhang and Ce Zhang and Gustavo Alonso},
  booktitle = {Proceedings of the 27th International Conference on Field Programmable Logic and Applications (FPL)},
  title     = {Scalable inference of decision tree ensembles: Flexible design for CPU-FPGA platforms},
  venue     = {Ghent, Belgium},
  year      = {2017}
}
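To see why tree ensembles map naturally onto parallel hardware, the following Python sketch shows that inference traverses each tree independently and only aggregates scalar results at the end; on the FPGA those per-tree traversals can proceed concurrently against trees held in on-chip memory. The flat node encoding and the two toy trees below are hypothetical and are not the memory layout used in the paper.

# Each node is a tuple: (feature_index, threshold, left_child, right_child, leaf_value).
# Internal nodes carry leaf_value = None; leaves carry a value and no children.
tree_a = [
    (0, 0.5, 1, 2, None),              # root: x[0] < 0.5 ? go to node 1 : node 2
    (None, None, None, None, 1.0),     # leaf
    (None, None, None, None, -1.0),    # leaf
]
tree_b = [
    (1, 2.0, 1, 2, None),              # root: x[1] < 2.0 ? go to node 1 : node 2
    (None, None, None, None, 0.5),     # leaf
    (None, None, None, None, -0.5),    # leaf
]
ensemble = [tree_a, tree_b]

def infer_tree(tree, x):
    """Walk a single tree from the root down to a leaf for one sample x."""
    node = 0
    while True:
        feature, threshold, left, right, leaf = tree[node]
        if leaf is not None:
            return leaf
        node = left if x[feature] < threshold else right

def infer_ensemble(ensemble, x):
    """Each tree is traversed independently; results are summed, as in boosting."""
    return sum(infer_tree(tree, x) for tree in ensemble)

print(infer_ensemble(ensemble, [0.3, 5.0]))   # 1.0 + (-0.5) = 0.5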
Accelerating Pattern Matching Queries in Hybrid CPU-FPGA Architectures
Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD Conference 2017, Chicago, IL, USA, May 2017
Taking advantage of recently released hybrid multicore architectures, such as the Intel Xeon+FPGA machine, where the FPGA has coherent access to the main memory through the QPI bus, we explore the benefits of specializing operators to hardware. We focus on two commonly used SQL operators for strings, LIKE and REGEXP_LIKE, and provide a novel and efficient implementation of these operators in reconfigurable hardware. We integrate the hardware accelerator into MonetDB, a main-memory column store, and demonstrate a significant improvement in response time and throughput. Our Hardware User Defined Function (HUDF) can speed up complex pattern matching by an order of magnitude in comparison to the database running on a 10-core CPU. The insights gained from integrating hardware-based string operators into MonetDB should also be useful for future designs combining hardware specialization and databases.
@inproceedings{abc,
  abstract  = {Taking advantage of recently released hybrid multicore architectures, such as the Intel Xeon+FPGA machine, where the FPGA has coherent access to the main memory through the QPI bus, we explore the benefits of specializing operators to hardware. We focus on two commonly used SQL operators for strings: LIKE, and REGEXP_LIKE, and provide a novel and efficient implementation of these operators in reconfigurable hardware. We integrate the hardware accelerator into MonetDB, a main-memory column store, and demonstrate a significant improvement in response time and throughput. Our Hardware User Defined Function (HUDF) can speed up complex pattern matching by an order of magnitude in comparison to the database running on a 10-core CPU. The insights gained from integrating hardware based string operators into MonetDB should also be useful for future designs combining hardware specialization and databases.},
  author    = {David Sidler and Zsolt Istv{\'a}n and Muhsen Owaida and Gustavo Alonso},
  booktitle = {Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD Conference 2017},
  title     = {Accelerating Pattern Matching Queries in Hybrid CPU-FPGA Architectures},
  url       = {http://doi.acm.org/10.1145/3035918.3035954},
  venue     = {Chicago, IL, USA},
  year      = {2017}
}
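The operators being offloaded are ordinary SQL string predicates; functionally, the work amounts to what the short Python sketch below does, scanning a string column and keeping the rows whose values match a pattern. The column contents and the pattern are made up for illustration, and the actual HUDF is invoked from SQL inside MonetDB rather than through code like this.

import re

# A string column and a REGEXP_LIKE-style predicate; both are hypothetical examples.
column = ["error: disk full", "ok", "warning: retrying", "error: timeout"]
pattern = re.compile(r"error: .*")

# Scan the column and collect the positions of rows whose value matches the pattern;
# this per-row matching is the work the paper pushes down to the FPGA-based operator.
matching_rows = [i for i, value in enumerate(column) if pattern.fullmatch(value)]
print(matching_rows)   # -> [0, 3]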
doppioDB: A Hardware Accelerated Database
Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD Conference 2017, Chicago, IL, USA, May 2017
@inproceedings{abc,
  author    = {David Sidler and Zsolt Istv{\'a}n and Muhsen Owaida and Kaan Kara and Gustavo Alonso},
  booktitle = {Proceedings of the 2017 ACM International Conference on Management of Data, SIGMOD Conference 2017},
  title     = {doppioDB: A Hardware Accelerated Database},
  url       = {http://doi.acm.org/10.1145/3035918.3058746},
  venue     = {Chicago, IL, USA},
  year      = {2017}
}
Centaur: A Framework for Hybrid CPU-FPGA Databases
25th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2017, Napa, CA, USA, April 2017
@inproceedings{abc,
  author    = {Muhsen Owaida and David Sidler and Kaan Kara and Gustavo Alonso},
  booktitle = {25th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2017},
  title     = {Centaur: A Framework for Hybrid CPU-FPGA Databases},
  url       = {https://doi.org/10.1109/FCCM.2017.37},
  venue     = {Napa, CA, USA},
  year      = {2017}
}