Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 6th Global Summit on Artificial Intelligence and Neural Networks in Helsinki, Finland.

Day 1:

Keynote Forum

Vincenzo Piuri

Università degli Studi di Milano, Italy

Keynote: Computational intelligence technologies for ambient intelligence
Biography:

Vincenzo Piuri is a Professor at the University of Milan, Italy, and has been a Visiting Professor at the University of Texas at Austin, USA, and a Visiting Researcher at George Mason University, USA. His research interests are intelligent systems, signal and image processing, pattern analysis and recognition, machine learning and industrial applications. He has published original results in 400+ papers. He is an IEEE Fellow, an ACM Distinguished Scientist and a Senior Member of INNS. He has been IEEE Director, IEEE Vice President for Technical Activities (2015) and President of the IEEE Computational Intelligence Society, and he is Editor-in-Chief of the IEEE Systems Journal (2013-19). For his contributions to the advancement of the theory and practice of computational intelligence in measurement systems and industrial applications, he received the IEEE Instrumentation and Measurement Society Technical Award (2002). He is an Honorary Professor at Óbuda University (Hungary), Guangdong University of Petrochemical Technology (China), Muroran Institute of Technology (Japan) and Amity University (India).

Abstract:

Adaptability and advanced services for ambient intelligence require intelligent technological support for understanding users' current needs and desires in their daily interactions with the environment, as well as for understanding the current status of the environment, even in complex situations. This infrastructure constitutes an essential base for smart living. Computational intelligence can provide additional flexible techniques for designing and implementing monitoring and control systems, which can be configured from behavioral examples or by mimicking approximate reasoning processes to achieve adaptable systems. This session will analyze the opportunities offered by computational intelligence to support the realization of adaptable operations and intelligent services for smart living in an ambient-intelligence infrastructure.
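As a toy illustration of the approximate-reasoning style of controller mentioned above, here is a minimal sketch, assuming Python; the fuzzy membership functions, thresholds and temperatures are invented for illustration only, not part of the talk.

```python
# Toy fuzzy controller: maps room temperature to heater power using
# two fuzzy rules, in the approximate-reasoning style mentioned above.
def mu_cold(t):
    # membership of "cold", rising as temperature drops below 18 °C
    return max(0.0, min(1.0, (18.0 - t) / 3.0))

def mu_warm(t):
    # membership of "warm", rising as temperature climbs above 21 °C
    return max(0.0, min(1.0, (t - 21.0) / 3.0))

def heater_power(t):
    # Rules: IF cold THEN full power; IF warm THEN zero power.
    # Defuzzify by a weighted average of the rule outputs.
    w_cold, w_warm = mu_cold(t), mu_warm(t)
    if w_cold + w_warm == 0:
        return 0.5                      # comfortable range: hold mid power
    return (w_cold * 1.0 + w_warm * 0.0) / (w_cold + w_warm)

for t in (12, 19, 26):
    print(t, "->", round(heater_power(t), 2))
```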

Keynote Forum

Erwin E Sniedzins

Mount Knowledge Inc., Canada

Keynote: Automatic reduction of global and personal data overload

Time: 10:20-11:00

Biography:

Erwin E Sniedzins is the President of Mount Knowledge Inc., Toronto, Canada. He has patented the Knowledge Generator™ (KG), an artificial intelligence application that takes any digitized textual content and automatically creates Micro Self-Reinforcement Learning and Personalized Gamification of that content as lessons, exercises and tests, with scores and marks in dynamic real time. He is the author of 12 published books. He is also a Professor at Hebei University, China.

Abstract:

Educators, students, employers and employees are inundated with big data; they are seeking relief. AI provides the bridge between big data and personalized data using Natural Language Processing (NLP) and Genetic Algorithm Neural Networks (GANN). Artificial Intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and translation between languages. AI is transforming humanity's cerebral evolution by replacing repetitive, habitual motions and thoughts. Over their evolution, humans developed primary biological interfaces to interpret the data they receive through the five senses: seeing, hearing, smelling, touching and tasting. In 1991 the World Wide Web was born, and sensory assimilation of data felt the first angst of a new medium. Twenty-six years later, more than 3.4 exabytes of data are generated every day, comparable to a stack of CDs reaching from the Earth to the Moon and back each day. This onslaught of data is causing people a great deal of anxiety, stress and frustration. To overcome the pressure of knowledge acquisition, people should learn to handle big data and turn it into their personalized data.

Keynote Forum

Paolo Rocchi

University Luiss Guido Carli, Italy

Keynote: Intelligence’s origin and theoretical modeling

Time: 11:15-11:55

Biography:

Paolo Rocchi received a Degree in Physics from the Sapienza University of Rome in 1969 and was associated with the Institute of Physics as an Assistant Lecturer. The following year he joined IBM as a Docent and Researcher. He has carried out research, and is still active, in various fields of computing, including software evolution, computer security, education, information theory, the fundamentals of computer science, artificial intelligence and software engineering. He has written over one hundred and thirty works, including a dozen books. Upon retirement in 2010 he was recognized as an Emeritus Docent at IBM for his achievements in basic and applied research. He is also an Adjunct Professor at University LUISS Guido Carli. He is a Founder Member of the Artificial Intelligence Italian Association and a member of various scientific societies, and his work has received recognition beyond the scientific community, including in the mass media.

Abstract:

Artificial Intelligence (AI) is usually defined as the science and engineering of making intelligent machines. AI experts do not confine themselves to practice; they bring into question the very nature of intelligence. To win this intellectual and scientific challenge, AI experts should be backed by a solid theoretical base; in particular, Theoretical Computer Science (TCS) should furnish the notions necessary to explore the advanced properties of machines. Unfortunately, this support does not seem adequate to the scope. TCS illustrates every aspect of the computer system by means of formal theories, but these theories are narrow, disjoint and abstract. How can AI experts answer profound questions about intelligence when the available views of the computer and the brain prove fragmentary and insufficient? By assumption, a unifying scientific theory begins with a simple concept and accounts for all the phenomena of its field through an inferential process; step by step, the theory justifies technical achievements and natural events. For example, mechanics is a unified body of knowledge that introduces the concept of speed; from it experts derive the notion of acceleration, and in turn the notions of force, work, energy and so forth. A set of interconnected conclusions illustrates the entire domain and disentangles any conundrum through deductive reasoning. The structure of such a theoretical construction in engineering and science has nothing to do with philosophy. The framework put forward here begins with a formal definition of the elementary piece of information, which is assumed to be distinguishable and meaningful.

  • Artificial Intelligence | Artificial Neural Networks | Deep Learning | Bioinformatics | Computational Creativity | Parallel Processing | Self-Organising Neural Networks
Location: Laine
Speaker
Biography:

Dean Armitage brings more than 25 years' experience in technology strategy and innovation to conversations about the future. With a passion for helping solve large-scale social problems for children, he specializes in developing relevant technology-based solutions. Philosophically, he is invested in furthering the conversation around one of the most contemplated topics of our time: artificial intelligence.

Abstract:

There is no singularity corollary proposing that all artificial life forms will be designed and commissioned with humanity's best interests in mind. Humanity will most likely face the singularity of artificial life during the first half of this century. As the singularity approaches, humanity must not assume that all artificial life forms will behave in an acceptable manner; more specifically, in a manner that does not jeopardize humanity itself. Establishing oversight of artificial life forms, both the anticipated variants and those not yet imagined, is necessary to help ensure the continuity of humanity. The peoples of the world require assurance, through oversight of artificial life forms, as humanity approaches the singularity. Establishing a functioning governing body that best represents the interests of humanity, while being nimble enough to adapt and act when tasked, is a daunting but essential burden for the world. A required task would be to build an inclusive governing body that spans philosophical, geographical, religious, cultural, political, industrial and technological boundaries and acts on behalf of, and in the best interests of, humanity. Individual human rights require reflection, but they lie in the shadow of the far greater need to contemplate safeguards for the human race. If a scientific study can propose and prove that humanity is not at risk from the dawn of the technological singularity through the creation of artificial life forms, then such a governing body is less compulsory. If a risk to humanity exists, a governing body to protect the interests of humanity is imperative. If a risk exists in this new paradigm, in which artificially created life forms possess cognitive abilities greatly outpacing those of their biological human creators, then action to govern is required.

Speaker
Biography:

Esteve Almirall is an Associate Professor at ESADE Business School, Ramon Llull University, Spain. He holds a PhD in Management Sciences (ESADE). Most of his career has been devoted to information technologies, especially in consulting, banking and finance, where he worked for more than 20 years in executive and board-level positions in IS, organization and marketing. As an entrepreneur, he actively participated in and founded several start-ups in the field. He also has an MBA, a PDD from IESE, a Diploma in Marketing from UC Berkeley and a GCPCL Diploma from Harvard Business School. He is passionate about the intersection between technology and innovation, and is very active in fields such as smart cities, innovation ecosystems and innovation in the public sector. He serves as Academic Director of the Master in Business Analytics and Director of the Center for Innovation in Cities.

Abstract:

In May 2005, Davenport, Cohen and Jacobson published a working paper that a few months later appeared in HBR as Competing on Analytics; one year later the book was released. This research was both chronicle and trigger of what has been called the analytics revolution, or the data revolution. New positions such as data scientist and data engineer were invented, and a whole new cycle of hype started around machine learning in particular and AI in general. Analytics is now presented as the modern source of competitive advantage; is that so? Models allow us to understand the world through stylization and abstraction; with them we can precisely delineate ideas and complex systems that are later implemented in the real world. The Holy Grail has always been to translate these models into fully automated systems with little need for human intervention, yet plastic enough to allow experimentation and rapid change. Advances in cloud, AI and IT are making this dream real. Platforms such as Amazon, Uber, Facebook, Instagram and WhatsApp are examples of such fully automated models. The implications are numerous: practically infinite scalability, zero marginal cost and no decreasing returns to scale, together with the extensive use of network effects and feedback loops, among others. Competition moved first from execution to analytics, and now to model implementation. With it, the competences companies need to compete successfully in this new environment, together with the meaning of R&D and experimentation, changed dramatically. Innovation has always been a key driver of progress and growth, but now, in environments with total plasticity and perfect execution, innovation is more relevant than ever. This new world is, however, full of challenges: code glitches, bandwagon effects and strange emergent behaviors are some of the unintended consequences of systems that act and decide on their own with non-human logics.

Leif Nordlund

NVIDIA Nordics, Sweden

Title: NVIDIA research: Pushing AI forwards
Speaker
Biography:

Leif Nordlund is a veteran of the HPC and datacenter business, where he has been with companies such as Sun Microsystems, AMD and now NVIDIA. Between 2013 and 2015 he was at KTH Royal Institute of Technology, developing collaboration between industry and academic research. With a background in AI and Computer Science from Uppsala University, he is currently in a position where he can combine knowledge from these exciting research areas with the research community in the Nordics.

Abstract:

Self-driving cars are currently a hot research topic. Deep learning revolutionized computer vision and is behind the rapid progress in the field of autonomous vehicles. NVIDIA is a key player in the area of self-driving cars and provides both hardware (NVIDIA DRIVE) and software platforms (DriveWorks) to support the development of autonomous vehicles. NVIDIA GPUs also allow training deep neural networks significantly faster than any other means. Cars are expected to become computers on wheels, as reaching full autonomy (e.g., Level 5) will require an unprecedented amount of computing power in a vehicle. At NVIDIA Helsinki, our deep learning engineers, as part of our global R&D effort on autonomous vehicles, focus on obstacle perception for self-driving cars. R&D ranges over object detection, lane detection, free-space segmentation and depth estimation, based on multiple sensors such as cameras and lidar.
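As a minimal sketch of the obstacle-perception step described above, the following assumes PyTorch and torchvision as stand-ins; the NVIDIA DRIVE and DriveWorks APIs are not shown here, and the confidence threshold and camera frame are illustrative placeholders.

```python
import torch
import torchvision

# Illustrative obstacle-perception step: a pretrained detector run on the
# GPU when one is available. torchvision stands in for the idea only.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

frame = torch.rand(3, 480, 640, device=device)   # stand-in camera frame in [0, 1)
with torch.no_grad():
    detections = model([frame])[0]               # dict of boxes, labels, scores

keep = detections["scores"] > 0.8                # keep confident obstacles only
print(detections["boxes"][keep])
```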

Speaker
Biography:

Andrea Burgio has expertise in solutions engineering built on sound academic bases and decades of hands-on international business experience. With formal studies in Statistics, Computer Science, Artificial Intelligence, Business Administration and International Finance across Italy, the UK and France, and executive positions in software, engineering and services corporations, he has developed a holistic and humanistic touch in change and project management. Teaching in engineering and business schools and counseling private ventures and government bodies, he constantly focuses on empowering students and professionals to become responsible future-crafters. For his academic research contributions in statistical models for decision making (Italian Council of National Research, 1992) and his track record of industrial and public-policy modeling for global institutions and corporations, Andrea was appointed in 2017 to the Board of IMT (Institut Mines-Télécom), a French public institution dedicated to higher education, research and innovation with 14,000 graduate students, 1,300 researchers and 1,600 PhD students.

Abstract:

Uncertainty has become a commodity and a must-have of any political agenda and corporate strategy. This is nothing new, since uncertainty has always been blamed or praised for anything from climate disasters to commodity-price volatility, and from social unrest to industrial-system failures. Academics, policy makers and corporate executives have been racking their brains for millennia to manage uncertainty and limit risk factors, while selling remedies bundled with the appropriate dose of fear. Fuzzy multiphase Markov chains and other stochastic models had their ups and downs and even made it, more or less subtly, into everyday must-haves like search engines and social augmentation tools; fortunes and misfortunes were built on modeling and engineering financial interactions, and many non-PhD-bearing commoners can now even make a living out of sharing their empirical understanding of the world's complexity. Have we collectively reached the limits of our analog representation of reality? Are we emotionally delaying the quantum leap? These and other less compelling issues are the everyday unconscious lot of entrepreneurs and political actors, and they shape most of today's human interactions, from multilateral trade agreements to the rise and fall of leaders and nations, in an astounding swirl of unprecedented interconnectedness. Science, and especially the newly rebranded Data Science, can contribute both to the exponential spread of fear factors and to the empowerment of all the members of our multi-meshed societies, leading potentially to either end of the cat-in-the-box situation. What we make of it all is a choice that belongs to no one in particular and everyone at the same time. Dealing with and in uncertainty can be a rewarding experience, while bearing a fearful and anxious weight that keeps us alert, humble and curious. While tools and models do help in decision making, collective and individual intelligence shall remain at both ends of the process.

  • Cognitive Computing | Natural Language Processing | Perceptrons | Backpropagation
Location: Laine
Speaker
Biography:

Erwin E Sniedzins is the President of Mount Knowledge Inc., Toronto, Canada. The company is a global leader in AI, neural networks, automatic gamification of any textual data and micro reinforcement learning. He has patented the Knowledge Generator™ (KG), an artificial intelligence application that takes any digitized textual content and automatically creates Micro Self-Reinforcement Learning and Personalized Gamification of that content as lessons, exercises and tests, with scores and marks in dynamic real time. The KG technology enables people to turn Data into Knowledge (DiK) 32% better, faster and easier, with more confidence and fun; no teacher or course designer is required. He is the author of 12 published books. He is also a Professor at Hebei University, China.

Abstract:

Educators, students, employers and employees are inundated with big data; they are seeking relief. AI provides the bridge between big data and personalized data using Natural Language Processing (NLP) and Genetic Algorithm Neural Networks (GANN). In recent years, NLP and GANN have been combined to provide "Data into Knowledge" (DiK) solutions. Research with NLP and GANN has enabled tools that selectively filter big data and turn it into micro self-reinforcement learning and personalized gamification of any content in dynamic real time. It is the combination of GA, NLP, MSRL and dynamic gamification that has enabled people to turn DiK 32% better, faster and easier, and with more confidence, than with traditional learning methods.

Speaker
Biography:

Mark Strefford is the Founder of Timelaps AI Limited, UK. He is an experienced technology leader with a record of delivering innovative technologies to leading national and international private and public sector organizations.

Abstract:

Introducing AI and ML technologies into the enterprise, as with any IT project, has a number of challenges. Many of these are comparable to those of a standard IT project: ensuring there is a clear business outcome, a defined scope for delivery, stakeholder engagement, a suitably qualified team and the ability to measure progress as the project continues. However, there are subtle differences that, if not addressed, or if business and technology stakeholders are not made aware of them, can increase risk, cause issues during delivery and ultimately cause the project to fail. With a much-hyped technology such as AI, still immature in wide-scale enterprise adoption, a high-profile failure can set back an organization's appetite to adopt the technology again in the near to medium term. The aim here is to highlight some of the issues likely to arise, from understanding the organization's strategy for AI, structuring a project delivery approach, engaging with users, ensuring the right mix of resources to deliver, integrating with wider systems, and addressing security, privacy and data-controller concerns, and subsequently to present some proven approaches for addressing them.

Speaker
Biography:

Gilberto Batres-Estrada holds an MSc in Theoretical Physics and an MSc in Engineering with specialization in Applied Mathematics and Statistics. He works as a Consultant Data Scientist, and his domain of expertise is deep learning. He also conducts independent research in deep learning and reinforcement learning in finance with researchers at Columbia University, NY, USA. He has contributed to the book Big Data and Machine Learning in Quantitative Investment, with a focus on long short-term memory networks. Previously he worked in finance as a Quantitative Analyst and built trading algorithms for a hedge fund.

Abstract:

A Recurrent Neural Network (RNN) is a type of neural network suited for sequential data. Its recurrent connections from one time step to the next introduce a depth in time which, in theory, is capable of learning long-term dependencies. There are, however, two serious issues with vanilla RNNs. The first is the problem of exploding gradients, where the gradient grows without bound, leading to instabilities in the system; growing gradients can in part be alleviated by a technique known as gradient clipping. The second problem is that of vanishing gradients, which has no general solution for the vanilla RNN beyond some special cases. The Long Short-Term Memory (LSTM) network is a type of RNN designed to solve both of these issues, which RNNs experience during the training phase. The solution lies in replacing the regular neural units of an RNN with units called memory cells. An LSTM memory cell is composed of four gates controlling the flow of the gradients, which dynamically respond to the input by changing the internal state according to the long-term interactions in the sequence. The LSTM has been shown to be very successful at solving problems in speech recognition, unconstrained handwriting recognition, machine translation, image captioning, parsing and, lately, the prediction of stock prices and time series. To combat over-fitting, techniques such as dropout have become standard when training these models. We start by presenting the RNN's architecture and continue with the LSTM and its mathematical formulation. This study also focuses on the technical aspects of training, regularization and performance.
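The training mechanics the abstract describes, gradient clipping against exploding gradients and dropout against over-fitting, can be seen in a minimal sketch, assuming PyTorch; the toy data, layer sizes and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, n_features=1, hidden=64, layers=2, dropout=0.2):
        super().__init__()
        # dropout between stacked LSTM layers combats over-fitting
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)          # memory cells carry long-term state
        return self.head(out[:, -1])   # predict from the last time step

model = LSTMRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 50, 1)             # toy batch: 32 sequences of length 50
y = x.sum(dim=1)                       # toy target with long-range dependence

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # clip the gradient norm to tame the exploding-gradient problem
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
```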

Speaker
Biography:

Abstract:

Fire-related disasters are the most common type of emergency situation, and they require thorough analysis for a quick and precise response. The damage due to fire is quite substantial: fire-related emergencies cause around 18,000 deaths per year, and the economic loss is considerable; forest fires in Northern California last year resulted in $65 billion in property damage. Emergency response is the most critical part of disaster management. It deals with the organization, allocation and deployment of resources in the most efficient manner, with the sole purpose of protecting public health, minimizing the damage caused by a natural or human-made disaster and bringing the situation under control. We intend to develop and apply novel AI algorithms, strategies and methodologies to fire emergency situations, outperforming previous state-of-the-art models in performance, precision and efficiency. The main aim is to develop a system that detects fire as soon as possible, analyzes it and its surroundings thoroughly, and finally plans the evacuation of all people with as little damage as possible, all in less than 5 seconds. Such an accurate, versatile and fast system requires many different sub-fields of deep learning to come together: it will employ image recognition and classification with CNNs, object detection and image segmentation with R-CNNs, and deep reinforcement learning with DQNs and policy gradients. The first step of building the fire management system was successfully accomplished. The system is quite malleable and can easily be modified for other applications. Although it will be large and complex, its tractability will not be compromised, because it is divided into several modules. It will be the first end-to-end trainable deep neural network emergency management system.
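As a sketch of the image-recognition module mentioned above, the following assumes PyTorch and shows a small fire / no-fire CNN classifier; the architecture, names and input size are hypothetical placeholders, not the authors' actual system.

```python
import torch
import torch.nn as nn

# Hypothetical minimal CNN for the fire / no-fire classification step;
# the system described above also adds R-CNN detection and RL planning.
class FireClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # pool to one value per channel
        )
        self.classifier = nn.Linear(64, 2)  # logits: [no_fire, fire]

    def forward(self, x):                    # x: (batch, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

model = FireClassifier()
frame = torch.randn(1, 3, 224, 224)          # one stand-in camera frame
probs = model(frame).softmax(dim=1)
print("P(fire) =", probs[0, 1].item())
```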

Speaker
Biography:

Patricio Julian Gerpe is Co-founder of the Argentine AI Community (IA AR) and an award-winning serial technology entrepreneur with more than 7 years in the startup scene. He is the main developer at Enpov, a start-up ranked in the top 20 out of 30,000+ projects at the UN-backed Hult Prize competition. He is an advocate for open AI and collaborative data science who has been interviewed by PhD podcasts and think-tank discussions at top-tier multinationals. He is also a TEDx speaker, content creator, startup mentor and full-stack developer specialized in process automation and intelligent agents, and has been featured on TV and in the mass media.

Abstract:

Statement of the Problem: Female rural merchants in Ethiopia lose at least 30% of their daily income due to the inaccessibility of current on-demand transportation infrastructure. These losses are driven by (A) time lost waiting for transportation, (B) constraints on the capacity they can carry and (C) lack of access to the most crowded markets. The purpose of this study is to assess the responsiveness of female rural merchants' income to the introduction of an agent-based computer program that connects them with the supply of on-demand transportation and finds an optimal fare. Specifically, the program is aimed at simulating the decision-making process behind the experience of going to Ethiopian markets to sell goods. Methodology & Theoretical Orientation: First, an ethnographic study using participant observation, in-depth interviews and focus groups was carried out to assess the current situation. Second, qualitative information from these interviews, such as the pain points and gain points of the experience of looking for on-demand transportation, was illustrated in a value-proposition map, then scored, ranked and quantified based on interview results to represent utility. Geolocation data points were taken as inputs as well. The experience was tested twice, with and without the application. Findings: The software application on average appeared to increase income for rural merchants in a range of 30%-50%. However, it is still premature to determine how the intelligent agent is likely to affect merchants' income; a much bigger subject sample and a range of data points are still needed to determine the effectiveness of the application. Conclusion & Significance: This research is preliminary in character and necessary for further research on the topic. Part of this work will be open sourced to the scientific community, aiming to encourage the exploration of new applications of agent-based software in assisting high-risk communities to make smarter decisions.
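As a toy illustration of the kind of agent-based fare matching the study describes, here is a minimal sketch in Python; every quantity and rule (hourly income, waiting time, driver minimum fares) is a hypothetical placeholder, not the study's actual model.

```python
import random

random.seed(0)

def find_fare(wait_hours, driver_min_fares, hourly_income=5.0):
    # The merchant agent will pay at most what waiting would have cost
    # her in lost sales; this is her break-even ceiling.
    ceiling = hourly_income * wait_hours
    feasible = [f for f in driver_min_fares if f <= ceiling]
    # Match with the cheapest willing driver; None means no deal clears.
    return min(feasible) if feasible else None

drivers = [round(random.uniform(1.0, 8.0), 2) for _ in range(5)]
print("driver minimum fares:", drivers)
print("agreed fare:", find_fare(wait_hours=2.0, driver_min_fares=drivers))
```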

Biography:

John Kontos has a PhD in computing from Birmingham University. He has published over 100 papers on, among other topics, Natural Language Processing, Deductive Question Answering, Explanation Systems and Machine Learning. He headed the design and construction of the first Greek general-purpose digital computer. He has served as Full Professor of AI at the Athens University of Economics and at the University of Athens.

Abstract:

The poster proposes to conceptualize different machine learning methods, e.g. the ANN-based ones, as equivalent to different string matching methods for retrieving from a table of examples. The aim of the proposal is to clarify what machine learning methods can be reduced to, in order to avoid the anthropomorphic concept of machine learning. Such a reduction may also help in converting "learned" systems into "explainable" systems that enhance trust. The table below shows a correspondence between string matching methods and machine learning methods.

String Matching method                          Machine Learning method

1. Sequential Serial
2. Non-sequential Serial                        Decision Trees (DT)
3. Weighted Total Parallel                      Single ANN
4. Weighted Partial Parallel                    Multiple ANNs
5. Hierarchical Weighted Partial Parallel       "Deep" ANNs

First, we notice that method No. 2 differs from No. 1 in the order of the symbol-by-symbol matching procedure; this change of order, obtained by computing a DT (e.g. with ID3), is meant to reduce the computing effort needed to match a string under test against a set of examples. Since, according to recent literature, methods Nos. 3 to 5 may be reduced to DTs, we may consider them variants of serial string matching as well, as the sketch below illustrates. Future work must analyze the case of "unseen" examples.
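A small sketch, assuming Python, of the correspondence in the table: method No. 1 retrieves by exact symbol-by-symbol comparison against the example table, while method No. 3 lets every position cast a weighted vote at once, as a single ANN layer effectively does; the example strings and weights are invented for illustration.

```python
# Method No. 1: sequential serial matching - compare symbol by symbol,
# in order, against each stored example; exact retrieval only.
def sequential_serial(test, examples):
    for label, example in examples:
        if len(example) == len(test) and all(
                a == b for a, b in zip(test, example)):
            return label
    return None

# Method No. 3: weighted total parallel matching - every position
# contributes a weighted vote at once, like a single ANN layer.
def weighted_total_parallel(test, examples, weights):
    def score(example):
        return sum(w for w, a, b in zip(weights, test, example) if a == b)
    return max(examples, key=lambda le: score(le[1]))[0]

examples = [("greet", "hello"), ("part", "bye__"), ("query", "help_")]
w = [1.0, 0.5, 0.5, 0.5, 1.0]            # illustrative per-position weights
print(sequential_serial("hello", examples))           # exact match: greet
print(weighted_total_parallel("hellp", examples, w))  # nearest by weight: greet
```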