In Coordination with IEEE/CS Dataflow STC


Dataflow and Big Data Panel – from Academic Research to Industry Practice

Time: 14:00-18:00 on October 20th, 2017
Guizhou Hall on the 3rd floor of the Empark Grand Hotel (Anhui)
(5558 Huizhou Avenue (Huizhou Dadao), Binhu New Area, Hefei, 230042, China)


Panelists:

Hong An, University of Science and Technology of China
Kemal Ebcioglu, Global Supercomputer Inc., USA
Dongrui Fan, Institute of Computing Technology, China
Guang Gao, University of Delaware, USA
Michael Gschwind, IBM, USA
Aaron Smith, Microsoft, USA
Greg Wright, Qualcomm, USA

Recently, we have witnessed rapidly growing interest in future computing systems capable of efficiently supporting high-performance computation for traditional HPC applications as well as the new challenges of big data workloads, driven by advances in methods from the domain of AI, in particular machine learning.
Preliminary system-wide hardware/software stacks for HPC systems, including extreme-scale (e.g., exascale) systems, have been planned and are in various stages of implementation across the world. However, these system designs are built bottom up, largely based on the current usage and plans of the application teams. We still have few applications that involve big data analytics and successfully exploit machine learning (including deep learning) methods with complex workflows at large scale. In other words, there are known gaps in these areas.
In the search for innovations in parallel computing architecture and system design to meet these challenges and fill such gaps, dataflow-inspired program execution models (PXMs) are entering a period of remarkable resurgence, with a bright prospect of offering a path toward future solutions.
The technical theme of this panel has a system focus on the challenges and opportunities arising from the relevant core technologies and applications, specifically the novel computational methods and algorithms from AI and machine learning in the context of big data analytics solutions.