Scientific Program

Conference Series Ltd invites all participants from across the globe to attend the 6th Global Summit on Artificial Intelligence and Neural Networks in Helsinki, Finland.

Day 1:

Keynote Forum

Vincenzo Piuri

Università degli Studi di Milano, Italy

Keynote: Computational intelligence technologies for ambient intelligence
Biography:

Vincenzo Piuri is a Professor at the University of Milan, Italy, and has been Visiting Professor at the University of Texas at Austin, USA, and Visiting Researcher at George Mason University, USA. His research interests include intelligent systems, signal and image processing, pattern analysis and recognition, machine learning and industrial applications, and his original results have been published in more than 400 papers. He is an IEEE Fellow, an ACM Distinguished Scientist and a Senior Member of INNS. He has been IEEE Director, IEEE Vice President for Technical Activities (2015) and President of the IEEE Computational Intelligence Society, and is Editor-in-Chief of the IEEE Systems Journal (2013-19). For his contributions to the advancement of theory and practice of computational intelligence in measurement systems and industrial applications, he received the IEEE Instrumentation and Measurement Society Technical Award (2002). He is an Honorary Professor at Obuda University (Hungary), Guangdong University of Petrochemical Technology (China), Muroran Institute of Technology (Japan), and Amity University (India).

Abstract:

Adaptability and advanced services for ambient intelligence require intelligent technological support for understanding the current needs and desires of users in their daily interactions with the environment, as well as the current status of the environment itself, including in complex situations. This infrastructure constitutes an essential base for smart living. Computational intelligence can provide flexible techniques for designing and implementing monitoring and control systems, which can be configured from behavioral examples or by mimicking approximate reasoning processes to achieve adaptable systems. This session will analyze the opportunities offered by computational intelligence to support the realization of adaptable operations and intelligent services for smart living in an ambient intelligence infrastructure.
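As a toy illustration of the point above that a monitoring and control system can be configured from behavioral examples, the sketch below (added for illustration; it is not part of the abstract) fits a small regression model to hypothetical user decisions and then reuses it as an ambient-control policy. The sensors, data and the choice of a scikit-learn decision tree are all assumptions.

```python
# Minimal sketch: an ambient controller "configured from behavioral examples".
# Features, data and the heating-setpoint task are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Past situations observed in the environment: [room temp (C), occupancy (0/1), hour of day]
situations = np.array([
    [16.0, 1, 7],
    [18.0, 1, 8],
    [21.0, 0, 12],
    [17.0, 1, 19],
    [20.0, 0, 23],
])
# The heating setpoint the user chose in each situation (the behavioral examples)
setpoints = np.array([21.5, 21.0, 17.0, 21.5, 16.5])

# Fit a small model that mimics the user's past decisions
policy = DecisionTreeRegressor(max_depth=3).fit(situations, setpoints)

# The learned policy now adapts the environment in a new situation
print(policy.predict([[17.5, 1, 20]]))  # suggested setpoint for a cold, occupied evening
```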

Keynote Forum

Erwin E Sniedzins

Mount Knowledge Inc., Canada

Keynote: Automatic reduction of global and personal data overload

Time: 10:20-11:00

Biography:

Erwin E Sniedzins is the President of Mount Knowledge Inc., Toronto, Canada. He has patented the Knowledge Generator™ (KG), an artificial intelligence application that takes any digitized textual content and automatically creates Micro Self-Reinforcement Learning and Personalized Gamification of that content, turning it into lessons, exercises and tests with scores and marks in dynamic real time. He has authored and published 12 books. He is also a Professor at Hebei University, China.

Abstract:

Educators, students, employers and employees are inundated with big data, and they are seeking relief. AI provides the bridge between big data and personalized data using Natural Language Processing (NLP) and Genetic Algorithm Neural Networks (GANN). Artificial intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. AI is transforming humanity's cerebral evolution by replacing repetitive habitual motions and thoughts. In its evolutionary process, humans developed their primary biological interfaces to interpret the data they receive through the five senses: seeing, hearing, smelling, touching and tasting. In 1991 the World Wide Web was born, and sensory assimilation of data felt the first angst of a new medium. Twenty-six years later, more than 3.4 exabytes of data are generated every day, comparable to a stack of CDs reaching from the Earth to the Moon and back each day. This onslaught of data is causing people a great deal of anxiety, stress and frustration. To overcome the pressure of knowledge acquisition, people should learn to handle big data and turn it into their personalized data.
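To make the "Genetic Algorithm Neural Network" phrase above a little more concrete, here is a minimal, purely illustrative sketch (not the Knowledge Generator's actual method) in which the weights of a tiny neural network are evolved by selection and mutation rather than trained by gradient descent; the toy relevance-scoring task and every parameter are assumptions.

```python
# Sketch of a genetic-algorithm-trained neural network (mutation-only, no crossover).
# The toy task and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: score how "relevant" a 4-feature content item is to a user profile.
X = rng.normal(size=(64, 4))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(float)   # synthetic relevance labels

def forward(weights, X):
    """Tiny network, 4 -> 6 -> 1, with all weights packed in one flat vector of length 37."""
    W1 = weights[:24].reshape(4, 6)
    b1 = weights[24:30]
    W2 = weights[30:36].reshape(6, 1)
    b2 = weights[36]
    h = np.tanh(X @ W1 + b1)
    z = (h @ W2).ravel() + b2
    return 1.0 / (1.0 + np.exp(-z))                # relevance probability per item

def fitness(weights):
    """Higher is better: log-likelihood of the labels under the network."""
    p = forward(weights, X).clip(1e-6, 1 - 1e-6)
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Genetic algorithm: keep the fittest weight vectors, copy and mutate them.
population = rng.normal(size=(40, 37))
for generation in range(200):
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-10:]]            # the 10 fittest survive
    children = np.repeat(parents, 4, axis=0)                  # 4 offspring each
    children += rng.normal(scale=0.1, size=children.shape)    # mutation
    population = children

best = max(population, key=fitness)
print("toy relevance accuracy:", np.mean((forward(best, X) > 0.5) == y))
```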

Keynote Forum

Paolo Rocchi

University Luiss Guido Carli, Italy

Keynote: Intelligence’s origin and theoretical modeling

Time: 11:15-11:55

Biography:

Paolo Rocchi received a Degree in Physics from the Sapienza University of Rome in 1969 and was associated with the Institute of Physics as an Assistant Lecturer. The following year he joined IBM as a Docent and Researcher. He has carried out research, and is still active, in various fields of computing including software evolution, computer security, education, information theory, fundamentals of computer science, artificial intelligence and software engineering. He has written over one hundred and thirty works, including a dozen books. Upon retirement in 2010 he was recognized as an Emeritus Docent at IBM for his achievements in basic and applied research. He is also an Adjunct Professor at University LUISS Guido Carli. He is a Founder Member of the Artificial Intelligence Italian Association and a member of various scientific societies, and his work has received recognition beyond the scientific community in the mass media.

Abstract:

Artificial Intelligence (AI) is usually defined as the science and engineering of making intelligent machines. AI experts do not confine themselves to practice; they also bring into question the very nature of intelligence. To win this intellectual and scientific challenge, AI experts should be backed by a solid theoretical base; in particular, Theoretical Computer Science (TCS) should furnish the notions necessary to explore the advanced properties of machines. Unfortunately, this support does not seem adequate for the purpose. TCS illustrates every aspect of the computer system by means of formal theories, but these theories are narrow, disjointed and abstract. How can AI experts answer profound questions about intelligence when the views of the computer and the brain prove to be fragmentary and insufficient? By way of contrast, a unifying scientific theory begins with a simple concept and details all the phenomena occurring in the field through an inferential process. Step by step, the theory justifies technical achievements and natural events. For example, mechanics is a unified body of knowledge that introduces the concept of speed; experts then derive the notion of acceleration from it, and in turn the notions of force, work, energy and so forth. A set of interconnected conclusions illustrates the entire domain and disentangles any conundrum through deductive reasoning. The structure of such a theoretical construction in engineering and science has nothing to do with philosophy. The framework put forward here begins with the formal definition of the elementary piece of information, which is assumed to be distinguishable and meaningful.
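The derivation chain from mechanics used as an example above can be written out explicitly (an illustration added here, not part of the original abstract): each notion is obtained from the previous one by a single inferential step.

```latex
\begin{align}
  v   &= \frac{dx}{dt}          && \text{speed}\\
  a   &= \frac{dv}{dt}          && \text{acceleration}\\
  F   &= m\,a                   && \text{force (Newton's second law)}\\
  W   &= \int F \, dx           && \text{work}\\
  E_k &= \tfrac{1}{2} m v^{2}   && \text{kinetic energy, with } W = \Delta E_k
\end{align}
```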

  • Artificial Intelligence | Artificial Neural Networks | Deep Learning | Bioinformatics | Computational Creativity | Parallel Processing | Self Organising Neural Networks
Location: Laine
Speaker
Biography:

Dean Armitage brings more than 25 years' experience in technology strategy and innovation to futuristic conversations. With a passion for helping solve large-scale social problems for children, he specializes in developing relevant technology-based solutions. Philosophically, he is invested in furthering the conversation around one of the most contemplated topics of our time: artificial intelligence.

Abstract:

There is no singularity corollary proposing that all artificial life forms will be designed and commissioned with humanity's best interests in mind. Humanity will most likely face the singularity of artificial life during the first half of this century. As the singularity approaches, humanity must not assume that all artificial life forms will behave in an acceptable manner; more specifically, in a manner that does not jeopardize humanity itself. Establishing oversight of artificial life forms is necessary for the anticipated and not-yet-imagined variants of artificial life to come, to help ensure the continuity of humanity. The peoples of the world require assurance through oversight of artificial life forms as humanity approaches the singularity. Establishing a functioning governing body that best represents the interests of humanity, while being nimble enough to adapt and act when tasked, is a daunting but essential burden for the world. A required task would be to build an inclusive governing body that spans philosophical, geographical, religious, cultural, political, industrial and technological boundaries and acts on behalf of, and in the best interests of, humanity. Individual human rights require reflection, but they lie in the shadow of the far greater need to contemplate safeguards for the human race. If a scientific study can propose and prove that humanity is not at risk from the dawn of the technological singularity through the creation of artificial life forms, then such a governing body is less compulsory. If a risk to humanity exists, a governing body to protect the interests of humanity is imperative. If a risk exists, then recognition of this new paradigm, in which artificially created life forms possess cognitive abilities greatly outpacing those of their biological human creators, demands action to govern.

Speaker
Biography:

Esteve Almirall is an Associate Professor at ESADE Business School, Ramon Llull University, Spain. He holds a PhD in Management Sciences (ESADE). Most of his career has been devoted to information technologies, especially in consulting, banking and finance, where he worked for more than 20 years in executive and board-level positions in IS, organization and marketing. As an entrepreneur, he actively participated in and founded several start-ups in the field. Moreover, he holds an MBA, a PDD from IESE, a Diploma in Marketing from UC Berkeley and a GCPCL Diploma from Harvard Business School. He is passionate about the intersection between technology and innovation, and is very active in fields such as smart cities, innovation ecosystems and innovation in the public sector. He serves as Academic Director of the Master in Business Analytics and Director of the Center for Innovation in Cities.

Abstract:

In May 2005, Davenport, Cohen and Jacobson published a working paper that a few months later appeared in HBR as Competing on Analytics; one year later the book was released. This research was both a chronicle of and a trigger for what has been called the analytics revolution, or the data revolution. New positions such as data scientist and data engineer were invented, and a whole new cycle of hype started around machine learning in particular and AI in general. Analytics is now presented as the modern source of competitive advantage; is that so? Models allowed us to understand the world through stylization and abstraction; in that way we were able to precisely delineate ideas and complex systems that were later implemented in the real world. The Holy Grail has always been to translate these models into fully automated systems with little need for human intervention, yet plastic enough to allow experimentation and rapid change. Advances in cloud, AI and IT are making this dream real. Platforms such as Amazon, Uber, Facebook, Instagram or WhatsApp are examples of this: fully automated models. The implications are numerous: practically infinite scalability, zero marginal cost and no decreasing returns to scale, together with the extensive use of network effects and feedback loops, among others. Competition moved first from execution to analytics, and now to model implementation. With it, the competences companies need to compete successfully in this new environment, together with the meaning of R&D and experimentation, changed dramatically. Innovation has always been a key driver of progress and growth, but now, in environments with total plasticity and perfect execution, innovation is more relevant than ever. This new world is, however, full of challenges: code glitches, bandwagon effects and strange emergent behaviors are some of the unintended consequences of systems that act and decide on their own with non-human logics.

Leif Nordlund

NVIDIA Nordics, Sweden

Title: NVIDIA research: Pushing AI forwards
Speaker
Biography:

Leif Nordlund is a veteran of the HPC and datacenter business, having worked at companies such as Sun Microsystems, AMD and now NVIDIA. Between 2013 and 2015 he was at KTH University, developing collaboration between industry and academic research. With a background in AI and Computer Science from Uppsala University, he is currently in a position where he can combine knowledge from these exciting research areas with the research community in the Nordics.

Abstract:

Self-driving cars are currently a hot research topic. Deep learning has revolutionized computer vision and is behind the rapid progress in the field of autonomous vehicles. NVIDIA is a key player in the area of self-driving cars and provides both hardware (NVIDIA DRIVE) and software platforms (DriveWorks) to support the development of autonomous vehicles. NVIDIA GPUs also allow training deep neural networks significantly faster than any other means. Cars are expected to become computers on wheels, as reaching full autonomy (e.g., Level 5) will require an unprecedented amount of computing power in the vehicle. At NVIDIA Helsinki, our deep learning engineers, as part of our global R&D effort on autonomous vehicles, focus on obstacle perception for self-driving cars. R&D ranges over object detection, lane detection, free-space segmentation and depth estimation, based on multiple sensors such as cameras and lidar.
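As a self-contained sketch of one of the perception tasks listed above, free-space segmentation, the following minimal PyTorch example (added for illustration; it is not NVIDIA DRIVE or DriveWorks code) runs one training step of a tiny fully convolutional network on dummy camera frames. The architecture, sizes and data are assumptions; in practice such a network is trained on large labelled driving datasets, but the structure, an image in and a per-pixel drivability mask out, is the same.

```python
# Minimal free-space segmentation sketch: camera image in, per-pixel "drivable" logit out.
# Architecture, input size and dummy data are illustrative assumptions.
import torch
import torch.nn as nn

class TinyFreeSpaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # one logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"   # use a GPU if present
model = TinyFreeSpaceNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# Dummy batch: 4 camera frames (3 x 128 x 256) and their drivable-space masks
frames = torch.rand(4, 3, 128, 256, device=device)
masks = torch.randint(0, 2, (4, 1, 128, 256), device=device).float()

optimizer.zero_grad()
loss = criterion(model(frames), masks)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```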

Speaker
Biography:

Andrea Burgio has expertise in solutions engineering built on sound academic foundations and decades of hands-on international business experience. With formal studies in Statistics, Computer Science, Artificial Intelligence, Business Administration and International Finance across Italy, the UK and France, and executive positions in software, engineering and services corporations, he has developed a holistic and humanistic touch in change and project management. Teaching in engineering and business schools or counseling private ventures and government bodies, he constantly focuses on empowering students and professionals to become responsible future-crafters. For his academic research contributions in statistical models for decision making (Italian National Research Council, 1992) and his track record of industrial and public policy modeling for global institutions and corporations, Andrea was appointed in 2017 to the Board of IMT (Institut Mines-Télécom), a French public institution dedicated to higher education, research and innovation with 14,000 graduate students, 1,300 researchers and 1,600 PhD candidates.

Abstract:

Uncertainty has become a commodity and a must-have of any political agenda and corporate strategy. This is nothing new, since it has always been blamed or praised for everything from climate disasters to commodity price volatility and from social unrest to industrial system failures. Academics, policy makers and corporate executives have been racking their brains for millennia to manage uncertainty and limit risk factors, while selling remedies bundled with the appropriate dose of fear. Fuzzy multiphase Markov chains and other stochastic models have had their ups and downs and even made it, more or less subtly, into everyday must-haves like search engines and social augmentation tools; fortunes and misfortunes were built on modeling and engineering financial interactions, and many non-PhD-bearing commoners can now even make a living out of sharing their empirical understanding of the world's complexity. Have we collectively reached the limits of our analog representation of reality? Are we emotionally delaying the quantum leap? These and other less compelling issues are the everyday unconscious lot of entrepreneurs and political actors and shape most of today's human interactions, from multilateral trade agreements to the rise and fall of leaders and nations, in an astounding swirl of unprecedented interconnectedness. Science, especially the newly rebranded data science, can contribute both to the exponential spread of fear factors and to the empowerment of all the members of our multi-meshed societies, leading potentially to both ends of the cat-in-the-box situation. What we make of it all is a choice that belongs to no one in particular and everyone at the same time. Dealing with and in uncertainty can be a rewarding experience, while bearing a fearful and anxious weight that keeps us alert, humble and curious. While tools and models do help in decision making, collective and individual intelligence shall remain at both ends of the process.