Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 6th Global Summit on Artificial Intelligence and Neural Networks in Helsinki, Finland.

Day 2:

Keynote Forum

Boris Stilman

University of Colorado & STILMAN Advanced Strategies, USA

Keynote: Introduction to the primary language: Discovering the algorithm of discovery

Time : 09:30-10:10

Biography:

Boris Stilman is a Professor of Computer Science at the University of Colorado Denver, USA, and the Chairman and CEO of STILMAN Advanced Strategies, LLC, USA. From 1972 to 1988, in Moscow, USSR, he was involved in the advanced research project PIONEER, led by former World Chess Champion Professor Mikhail Botvinnik.

Abstract:

This study emphasizes recent results of my research related to the primary language of the human brain and its applications. Our hypothesis is that the primary language is the language of visual streams, i.e., it is based on mental dynamic images (movies), visual streams, so that those streams drive reasoning, reading and writing, and translation, and serve as a different form, the foundation, for all the sciences (following von Neumann's suggestion). Note that, according to this hypothesis, the primary language is not a language in the mathematical sense. This study is interested in revealing the detailed nature of the primary language by investigating ancient algorithms crucial for the development of humanity. It is likely that such ancient algorithms are powered by the primary language directly and thus utilize symbolic reasoning on a limited scope. Our contention is that the hypothetical Algorithm of Discovery (AD) must be one of such algorithms. Yet another such algorithm is Linguistic Geometry (LG), a type of game theory that generates detailed intelligent courses of action (predicts the behavior) of all sides in a conflict with accuracy exceeding the level of human experts. In a way, we use a Rosetta Stone approach by investigating three items together: the primary language, LG and the AD.

Keynote Forum

Steve Wells

Fast Future Publishing Ltd, UK

Keynote: Going beyond genuine stupidity to ensure AI serves humanity

Time : 10:10-10:50

Biography:

Steve Wells is a global futurist, experienced speaker, strategist and facilitator. He helps delegates and clients understand the key future factors driving innovation, growth and disruptive change, highlighting the new thinking and business models enabled by exponential technologies such as AI, immersive technologies like augmented and virtual reality, and hyper-connectivity. He has a professional background as an independent Strategy Consultant and Researcher and as a Strategy Development Manager in Pfizer’s UK business, where he worked at C-suite and senior manager level to lead and facilitate the company’s strategic planning process.

Abstract:

Almost every new technology arrives with a fan base claiming it will revolutionize life on Earth. For some, AI is just one more in a long list of over-hyped technologies that won’t live up to their promise. At the other end of the spectrum are those who believe this could literally be the game-changing invention that reshapes our world. They argue that humanity has a directional choice: do we want the transformation to enable an unleashing of human potential, or to lead us towards the effective end of life on the planet? We believe AI is like no technology that has gone before, but we are far too early in its evolution to know how far and how rapidly this Fourth Industrial Revolution powered by smart machines might spread. What is clear is that, although we are only in the very early stages of the AI revolution, we can already see that it has the potential to disrupt every aspect of our private and public lives. Indeed, AI is already having a dramatic impact on everything from medical diagnosis and construction to government decision making, financial services and even dating sites. As the pace of development accelerates and AI’s potential becomes a little clearer, the warnings grow ever stronger about the threat to jobs and privacy and the risks of humanity becoming enslaved by the machine. In a fast-changing world with a rapidly changing reality, it is no surprise that as individuals, businesses and even governments we often plan only for the next month, quarter or year. However, the far-reaching and seemingly limitless potential applications, impacts and implications of AI demand that we look more deeply into the opportunities and challenges that an AI-enabled future might bring. So the question is: how can we ensure that we go beyond genuine stupidity in preparing for artificial intelligence?

Keynote Forum

Mihaela Ulieru

IMPACT Institute for the Digital Economy

Keynote: Personal analytic engines through blockchain: disrupting big data

Time : 11:00-11:40

Biography:

Mihaela Ulieru is an expert in distributed intelligent systems. For her scientific work, which has positively impacted citizens in emerging and advanced economies across Asia-Pacific, North America and Europe, she was awarded, among many other honors, the “Industrial Research Chair in Intelligent Systems” and the “Canada Research Chair in e-Society”, and was appointed to numerous boards, among them the Science Councils of Singapore, Canada and the European Commission, and to the Global Agenda Council of the World Economic Forum. She founded two research labs leading several international large-scale projects, among which: Organic Governance, Adaptive Risk Management, Self-organizing Security, Living Technologies and Emulating the Mind.

Abstract:

Recently, Distributed Ledger Technologies (blockchain) have entered the big data space, threatening to disrupt the industry through innovative initiatives such as Endor.com, Numerai.io and Matrix.io. These new kids on the (AI) block promise to democratize analytics through blockchain and become, as in the case of Endor, the “Google of predictive analytics.” As opposed to Google, however, which is a centralized platform, Endor promises a decentralized protocol called “Endor.coin” that enables any member of the community to contribute to its improvement and be rewarded in tokens, which in turn can be used to get answers from an ever-improving prediction engine. A protocol enables something. Just as TCP/IP is the protocol that enables peer-to-peer exchange of files, and blockchain is the protocol that enables the peer-to-peer exchange of assets, Endor proposes to be the protocol for the Internet of Predictions, enabling anyone to improve it by plugging in new prediction engines. Just as anyone can build new applications on Ethereum, anyone can use Endor to create new businesses, such as new blockchain-enabled insurance models, predictive e-Health and personal medicine, optimized services for small businesses seeking to better use existing advertisement services, innovative marketing models on blockchain, and so on. The key to all of this will be building a vibrant community of users and contributors. On centralized platforms, community happens through “if you build it [the platform], they will come.” On blockchain, the motto is “if they come, they will build it”, and contributors are brought in by giving them tokens. We typically think of such tokens as securities to be liquidated, but Endor is promising a new approach: the tokens (EDR) will not be liquidated but will act as a utility to access the overlaying services, such as data or predictive models. Endor plans to build its capabilities by committing 60% of all tokens to contributors: entrepreneurs (called “catalysts”) who build businesses using the platform will receive 25%, researchers who build the algorithms used in Endor’s library will receive 15%, and strategic partners such as Bancor and ORBS that maximize distribution of the Endor coin will receive 20% of the total amount of tokens. A crucial factor in the success of such decentralized endeavors is an appropriate allocation of tokens that ensures the right mix of stakeholders in the ecosystem. The creation of analytics engines on blockchain is a revolution in the way we think about and leverage big data. Using blockchain, a company like Endor can address many of the ills plaguing centralized data platforms by ensuring: (1) data sovereignty, letting users choose to provide their data in exchange for tokens; and (2) accountability, giving users full transparency as to what data and what engine was used to make a prediction.

  • Artificial Intelligence | Artificial Neural Networks | Deep Learning | Bioinformatics | Computational Creativity | Parallel Processing | Self Organising Neural Networks
Location: Laine
Speaker
Biography:

Dean Armitage brings more than 25 years’ experience in technology strategy and innovation to futuristic conversations. With a passion for helping to solve large-scale social problems for children, he specializes in developing relevant technology-based solutions. Philosophically, he is invested in furthering conversation around one of the most contemplated topics of our time: artificial intelligence.

Abstract:

There does not exist a singularity corollary that proposes all artificial life forms will be designed and commissioned with humanity’s best interests in mind. Humanity will most likely face the singularity of artificial life during the first half of this century. As the singularity approaches, humanity must not assume that all artificial life forms will behave in an acceptable manner, more specifically, in a manner that does not jeopardize humanity itself. Establishing oversight of artificial life forms, both the anticipated variants and those not yet imagined, is necessary to help ensure the continuity of humanity. The peoples of the world require assurance through oversight of artificial life forms as humanity approaches the singularity. Establishing a functioning governing body that best represents the interests of humanity, while being nimble enough to adapt and act when tasked, is a daunting but essential burden for the world. A required task would be to build an inclusive governing body that spans philosophical, geographical, religious, cultural, political, industrial and technological boundaries and acts on behalf of, and in the best interests of, humanity. Individual human rights require reflection but lie in the shadow of the far greater need to contemplate safeguards for the human race. If a scientific study can propose and prove that humanity is not at risk from the dawn of the technological singularity through the creation of artificial life forms, then such a governing body is less compulsory. If a risk to humanity exists, a governing body to protect the interests of humanity is imperative. If a risk exists through the recognition of this new paradigm, in which artificially created life forms possess cognitive abilities greatly outpacing those of their biological human creators, then action to govern is required.

Speaker
Biography:

Esteve Almirall is an Associate Professor at ESADE Business School, Ramon Llull University, Spain. He holds a PhD in Management Sciences (ESADE). Most of his career has been devoted to information technologies, especially in consulting, banking and finance, where he worked for more than 20 years in executive and board-level positions in IS, organization and marketing. As an entrepreneur, he actively participated in and founded several start-ups in the field. Moreover, he holds an MBA, a PDD from IESE, a Diploma in Marketing from UC Berkeley and a GCPCL Diploma from Harvard Business School. He is passionate about the intersection between technology and innovation and is very active in fields such as smart cities, innovation ecosystems and innovation in the public sector. He serves as Academic Director of the Master in Business Analytics and Director of the Center for Innovation in Cities.

Abstract:

In May 2005, Davenport, Cohen and Jacobson published a working paper that a few months later appeared in HBR as Competing on Analytics; one year later the book was released. This research was both chronicle and trigger of what has been called the analytics revolution or the data revolution. New positions such as data scientist and data engineer have been invented, and a whole new cycle of hype around machine learning in particular and AI in general has started. Analytics is now presented as the modern source of competitive advantage; is that so? Models allowed us to understand the world through stylization and abstraction; that way we were able to precisely delineate ideas and complex systems that were later implemented in the real world. The Holy Grail has always been to translate these models into fully automated systems with little need for human intervention, plastic enough to allow experimentation and rapid change. Advances in cloud, AI and IT are making this dream real. Platforms such as Amazon, Uber, Facebook, Instagram or WhatsApp are examples of this: fully automated models. The implications are numerous: practically infinite scalability, zero marginal cost and no decreasing returns to scale, together with the extensive use of network effects and feedback loops, among others. Competition moved first from execution to analytics and now to model implementation. With it, the competences companies need to compete successfully in this new environment, together with the meaning of R&D and experimentation, have changed dramatically. Innovation has always been a key driver of progress and growth, but now, in environments with total plasticity and perfect execution, innovation is more relevant than ever. This new world is, however, full of challenges: code glitches, bandwagon effects and strange emergent behaviors are some of the unintended consequences of systems that act and decide on their own with non-human logics.

Leif Nordlund

NVIDIA Nordics, Sweden

Title: NVIDIA research: Pushing AI forwards
Speaker
Biography:

Leif Nordlund is a veteran of the HPC and datacenter business, having worked for companies such as Sun Microsystems, AMD and now NVIDIA. Between 2013 and 2015 he was at KTH University, developing collaboration between industry and academic research. With a background in AI and Computer Science from Uppsala University, he is currently in a position where he can combine knowledge from these exciting research areas with the research community in the Nordics.

Abstract:

Self-driving cars are currently a hot research topic. Deep learning revolutionized computer vision and is behind the rapid progress in the field of autonomous vehicles. NVIDIA is a key player in the area of self-driving cars and provides both hardware (NVIDIA DRIVE) and software platforms (DriveWorks) to support the development of autonomous vehicles. NVIDIA GPUs also allow deep neural networks to be trained significantly faster than by any other means. Cars are expected to become computers on wheels, as reaching full autonomy (e.g. Level 5) will require an unprecedented amount of computing power in a vehicle. At NVIDIA Helsinki, our deep learning engineers, as part of our global R&D effort on autonomous vehicles, focus on obstacle perception for self-driving cars. R&D ranges from object detection, lane detection and free-space segmentation to depth estimation, based on multiple sensors such as cameras and lidar.
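As a rough illustration of one of the perception tasks listed above, here is a minimal sketch of a fully convolutional network for free-space segmentation from camera frames. It is a generic example under assumed layer sizes and input shapes, not the NVIDIA DRIVE or DriveWorks API.

```python
# Hypothetical sketch only: a tiny fully convolutional network for per-pixel
# free-space segmentation from camera images. Not the NVIDIA DRIVE/DriveWorks
# stack; the architecture and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyFreeSpaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the image while extracting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a per-pixel free-space score.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                        # x: (batch, 3, H, W) camera frames
        return self.decoder(self.encoder(x))     # (batch, 1, H, W) logits

model = TinyFreeSpaceNet().to("cuda" if torch.cuda.is_available() else "cpu")
frames = torch.randn(4, 3, 128, 256, device=next(model.parameters()).device)
logits = model(frames)                           # per-pixel drivable-space logits
```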

Speaker
Biography:

Andrea Burgio has expertise in solutions engineering built on sound academic foundations and decades of hands-on international business experience. With formal studies in Statistics, Computer Science, Artificial Intelligence, Business Administration and International Finance across Italy, the UK and France, and executive positions in software, engineering and services corporations, he has developed a holistic and humanistic touch in change and project management. Teaching in engineering and business schools and counseling private ventures and government bodies, he constantly focuses on empowering students and professionals to become responsible future-crafters. For his academic research contributions in statistical models for decision making (Italian Council of National Research, 1992) and his track record of industrial and public policy modeling for global institutions and corporations, Andrea was appointed in 2017 to the Board of IMT (Institut Mines-Télécom), a French public institution dedicated to higher education, research and innovation with 14,000 graduate students, 1,300 researchers and 1,600 PhD students.

Abstract:

Uncertainty has become a commodity and a must-have of any political agenda and corporate strategy. This is nothing new, since it has always been blamed or praised for anything from climate disasters to commodity price volatility, and from social unrest to industrial system failures. Academics, policy makers and corporate executives have been racking their brains for millennia to manage uncertainty and limit risk factors, while selling remedies bundled with the appropriate dose of fear. Fuzzy multiphase Markov chains and other stochastic models had their ups and downs and even made it, more or less subtly, into everyday must-haves like search engines and social augmentation tools; fortunes and misfortunes were built on modeling and engineering financial interactions, and many non-PhD-bearing commoners can now even make a living out of sharing their empirical understanding of the world’s complexity. Have we collectively reached the limits of our analog representation of reality? Are we emotionally delaying the quantum leap? These and other less-compelling issues are the everyday unconscious lot of entrepreneurs and political actors and shape most of today’s human interactions, from multilateral trade agreements to the rise and fall of leaders and nations, in an astounding swirl of unprecedented interconnectedness. Science, especially the newly rebranded Data one, can contribute both to the exponential spread of fear factors and to the empowerment of all the members of our multi-meshed societies, leading potentially to both ends of the cat-in-the-box situation. What we make of it all is a choice that belongs to no one in particular and everyone at the same time. Dealing with and in uncertainty can be a rewarding experience, while bearing a fearful and anxious weight that keeps us alert, humble and curious. While tools and models do help in decision making, collective and individual intelligence shall remain at both ends of the process.

  • Cognitive Computing | Natural Language Processing | Perceptrons | Backpropagation
Location: Laine
Speaker
Biography:

Erwin E Sniedzins is the President of Mount Knowledge Inc., Toronto, Canada. The company is a global leader in AI, neural networks, automatic gamification of any textual data and micro reinforcement learning. He has patented the Knowledge Generator™ (KG), an artificial intelligence application that takes any digitized textual content and automatically creates micro self-reinforcement learning and personalized gamification of that content as lessons, exercises and tests, with scores and marks, in dynamic real time. The KG technology enables people to turn Data into Knowledge (DiK) 32% better, faster and easier, with more confidence and fun; no teacher or course designer is required. He has authored and published 12 books. He is also a Professor at Hebei University, Canada.

Abstract:

Educators, students, employers and employees are inundated with big data and are seeking relief. AI provides the bridge between big data and personalized data using NLP and GANN. In recent years, Natural Language Processing (NLP) and Genetic Algorithm Neural Networks (GANN) have entered to provide “Data into Knowledge” (DiK) solutions. Research with NLP and GANN has enabled tools to be developed that selectively filter big data and combine it into micro self-reinforcement learning and personalized gamification of any DiK in dynamic real time. It is the combination of GA, NLP, MSRL and dynamic gamification that has enabled people to experience relief in their quest to turn DiK 32% better, faster and easier, and with more confidence, than with traditional learning methods.

Speaker
Biography:

Mark Strefford is the Founder of Timelaps AI Limited, UK. He is an experienced technology leader who has delivered innovative technologies to leading national and international private and public sector organizations.

 

Abstract:

Introducing AI and ML technologies into the enterprise, as with any IT project, brings a number of challenges. Many of these are comparable to those of delivering a standard IT project, such as ensuring there is a clear business outcome, a defined scope for delivery, stakeholder engagement, a suitably qualified team and the ability to measure progress as the project continues. However, there are subtle differences that, if not addressed or if business and technology stakeholders are not made aware of them, can increase risk, cause issues during delivery and ultimately cause the project to fail. With a much-hyped technology that is immature in its wide-scale enterprise adoption, such as AI, a high-profile failure can set an organization back in its appetite to adopt the technology again in the near to medium term. The aim is to highlight some of the issues that are likely to arise, from understanding the organization's strategy for AI, structuring a project delivery approach, engaging with users, ensuring that you have the right mix of resources to deliver, and integrating with wider systems, to security, privacy and data controller concerns, and subsequently to present some proven approaches to addressing them.

Speaker
Biography:

Gilberto Batres-Estrada holds an MSc in Theoretical Physics and an MSc in Engineering with a specialization in Applied Mathematics and Statistics. He works as a Consultant Data Scientist and his domain of expertise is deep learning. He also conducts independent research in deep learning and reinforcement learning in finance with researchers at Columbia University, NY, USA. He has contributed to the book Big Data and Machine Learning in Quantitative Investment, with a focus on long short-term memory networks. Previously, he worked in finance as a Quantitative Analyst and built trading algorithms for a hedge fund.

Abstract:

A Recurrent Neural Network (RNN) is a type of neural network suited for sequential data. Its recurrent connections from one time step to the next introduce a depth in time which, in theory, is capable of learning long-term dependencies. There are, however, two serious issues with vanilla RNNs. The first is the problem of exploding gradients, where the gradient grows without bound, leading to instabilities in the system; growing gradients can in part be alleviated by a technique known as gradient clipping. The second problem is that of vanishing gradients, which has no general solution for the vanilla RNN except in some special cases. The Long Short-Term Memory (LSTM) network is a type of RNN designed to solve both of these issues, which RNNs experience during the training phase. The solution lies in replacing the regular neural units of an RNN with units called memory cells. An LSTM memory cell is composed of four gates controlling the flow of the gradients, which dynamically respond to the input by changing the internal state according to the long-term interactions in the sequence. The LSTM has been shown to be very successful at solving problems in speech recognition, unconstrained handwriting recognition, machine translation, image captioning, parsing and, lately, in prediction of stock prices and time series. To combat over-fitting, techniques such as dropout have become standard when training these models. We start by presenting the RNN's architecture and continue with the LSTM and its mathematical formulation. This study also focuses on the technical aspects of training, regularization and performance.
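To make the training-phase remedies above concrete, the following is a minimal sketch, not taken from the talk, of an LSTM trained with gradient-norm clipping (against exploding gradients) and dropout (against over-fitting); the architecture, data shapes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: an LSTM regressor trained with gradient clipping and dropout.
# Shapes, sizes and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, n_features, hidden_size=64, num_layers=2, dropout=0.2):
        super().__init__()
        # LSTM memory cells replace the plain recurrent units of a vanilla RNN;
        # dropout is applied between the stacked LSTM layers.
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # predict from the last time step

model = LSTMRegressor(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 50, 8)                       # toy batch: 32 sequences of length 50
y = torch.randn(32, 1)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Clip the gradient norm to tame exploding gradients during backprop through time.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```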

Speaker
Abstract:

Fire-related disasters are the most common type of emergency situation and require thorough analysis for a quick and precise response. The damage due to fire is substantial: the number of deaths due to fire-related emergencies is around 18,000 per year, and the economic loss is considerable; forest fires in Northern California last year resulted in $65 billion in property damages. Emergency response is the most critical part of disaster management; it deals with the organization, allocation and deployment of resources in the most efficient manner, with the sole purpose of protecting public health, minimizing the damage caused by a natural or human-made disaster and bringing the situation under control. We intend to develop and apply novel AI algorithms, strategies and methodologies to fire emergency situations and to outperform previous state-of-the-art models in performance, precision and efficiency. The main aim is to develop a system that can detect fire as soon as possible, analyze it and its surroundings thoroughly and finally plan the evacuation of all people with as little damage as possible; all of this would be done in less than 5 seconds. Such an accurate, versatile and fast system requires many different sub-fields of deep learning to come together. It will employ image recognition and classification with CNNs, object detection and image segmentation with R-CNNs, and deep reinforcement learning with DQNs and policy gradients. The first step of building the fire management system has been successfully accomplished. The system is quite malleable and can easily be modified for other applications. Even though it will be large and complex, its tractability will not be compromised because it is divided into several modules. It will be the first end-to-end trainable deep neural network emergency management system.
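A minimal sketch of how the modular structure described above might be composed (fast detection, then scene analysis, then evacuation planning); the class names, interfaces and stub logic are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the modular pipeline: detection -> scene analysis ->
# evacuation planning. The classes and stub logic are assumptions only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SceneAnalysis:
    fire_regions: List[Tuple[int, int]]        # e.g. from R-CNN detection/segmentation
    occupant_positions: List[Tuple[int, int]]  # people located in the frame

class StubDetector:
    """Stands in for the CNN / R-CNN perception modules."""
    def fire_present(self, frame) -> bool:
        return bool(frame.get("hot_pixels"))
    def analyze(self, frame) -> SceneAnalysis:
        return SceneAnalysis(frame["hot_pixels"], frame["people"])

class StubPlanner:
    """Stands in for the reinforcement-learning planner (DQN / policy gradients)."""
    def plan(self, scene: SceneAnalysis) -> List[str]:
        routes = []
        for person in scene.occupant_positions:
            nearest_fire = min(
                scene.fire_regions,
                key=lambda f: abs(f[0] - person[0]) + abs(f[1] - person[1]),
            )
            routes.append(f"occupant at {person}: move away from fire at {nearest_fire}")
        return routes

def emergency_step(frame, detector, planner) -> List[str]:
    if not detector.fire_present(frame):        # stage 1: fast fire detection
        return []
    scene = detector.analyze(frame)             # stage 2: scene analysis
    return planner.plan(scene)                  # stage 3: evacuation planning

frame = {"hot_pixels": [(3, 4)], "people": [(10, 2), (1, 1)]}
print(emergency_step(frame, StubDetector(), StubPlanner()))
```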

Speaker
Biography:

Patricio Julian Gerpe is a Co-founder of the Argentine AI Community (IA AR) and an award-winning serial technology entrepreneur with more than 7 years in the startup scene. He is the main developer at Enpov, a start-up ranked in the top 20 out of more than 30,000 projects at the UN-backed Hult Prize competition. He is an advocate for open AI and collaborative data science who has been interviewed on PhD podcasts and in think-tank discussions at top-tier multinationals. He is also a TEDx speaker, content creator, startup mentor and full-stack developer specialized in process automation and intelligent agents, and has been featured on TV and in mass media.

Abstract:

Statement of the Problem: Female rural merchants in Ethiopia lose at least 30% of their daily income due to the inaccessibility of current on-demand transportation infrastructure. These losses are driven by (A) time lost waiting for transportation, (B) constraints on the capacity they can carry and (C) lack of access to the most crowded markets. The purpose of this study is to assess the responsiveness of female rural merchants' income to the introduction of an agent-based computer program that connects them with the supply of on-demand transportation and finds an optimal fare. Specifically, the program is intended to simulate the decision-making process behind the experience of going to Ethiopian markets to sell goods. Methodology & Theoretical Orientation: Firstly, an ethnographic study using participant observation, in-depth interviews and focus groups was conducted to assess the current situation. Secondly, qualitative information from these interviews, such as pain points and gain points of the experience of looking for on-demand transportation, was illustrated in a value proposition map, then scored, ranked and quantified based on interview results to represent utility. Geolocalization data points were taken as inputs as well. The experience was tested twice, with and without the application. Findings: The software application on average seemed to increase income for rural merchants in a range of 30%-50%. However, it is still premature to determine how the intelligent agent affects merchants' income; a much bigger subject sample and a wider range of data points are still needed to determine the effectiveness of this application. Conclusion & Significance: The character of this research is preliminary and necessary to further research on this topic. Part of this work will be open sourced to the scientific community, aiming to encourage the exploration of new applications of agent-based software in assisting high-risk communities to make smarter decisions.

Biography:

John Kontos holds a PhD in Computing from Birmingham University. He has published over 100 papers on, among other topics, Natural Language Processing, Deductive Question Answering, Explanation Systems and Machine Learning. He headed the design and construction of the first Greek digital general-purpose computer. He has served as Full Professor of AI at the Economic University of Athens and at the University of Athens.

 

Abstract:

This poster proposes to conceptualize different machine learning methods, e.g. ANN-based ones, as equivalent to different string matching methods for retrieving answers from a table of examples. The aim of the proposal is to clarify what machine learning methods can be reduced to, in order to avoid the anthropomorphic concept of machine learning. Such a reduction may also help in converting “learned” systems to “explainable” systems that enhance trust. The table below shows the correspondence between string matching methods and machine learning methods.

String Matching method                          Machine Learning method

1. Sequential Serial
2. Non-sequential Serial                        Decision Trees (DT)
3. Weighted Total Parallel                      Single ANN
4. Weighted Partial Parallel                    Multiple ANNs
5. Hierarchical Weighted Partial Parallel       “Deep” ANNs

First, we notice that method No. 2 differs from No. 1 in the order of the symbol-by-symbol matching procedure. This change of order, obtained by computing a DT (e.g. with ID3), is meant to reduce the computing effort necessary for matching between a set of examples and a string under test. Since methods Nos. 3 to 5, according to recent literature, may be reduced to DTs, we may consider them as variants of serial string matching. Future work must analyze the “unseen” example cases.
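To make the intended correspondence concrete, here is a minimal sketch, under assumed example data and weights, contrasting retrieval by sequential symbol-by-symbol matching (method No. 1) with a weighted total parallel match in the spirit of a single ANN (method No. 3).

```python
# Hedged illustration of the proposed correspondence: retrieving a label from
# a table of example strings either by sequential matching (method No. 1) or
# by a weighted "total parallel" match reminiscent of a single ANN layer
# (method No. 3). The examples, encoding and weights are assumptions.
from typing import Optional

examples = {"cat": "animal", "car": "vehicle", "cap": "clothing"}

def sequential_match(query: str) -> Optional[str]:
    # Method No. 1: compare symbol by symbol against each stored example in turn.
    for example, label in examples.items():
        if len(example) == len(query) and all(a == b for a, b in zip(example, query)):
            return label
    return None

def weighted_parallel_match(query: str) -> str:
    # Method No. 3: score every example at once; each matching position casts a
    # weighted vote and the highest total score wins (a perceptron-like rule).
    weights = [1.0, 1.0, 2.0]              # assume the last symbol is most informative
    def score(example: str) -> float:
        return sum(w for w, a, b in zip(weights, example, query) if a == b)
    return examples[max(examples, key=score)]

print(sequential_match("cap"))         # exact retrieval -> "clothing"
print(weighted_parallel_match("cor"))  # no exact hit; best weighted match -> "vehicle"
```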