Scientific Program

Conference Series Ltd invites all participants from across the globe to attend the 6th Global Summit on Artificial Intelligence and Neural Networks in Helsinki, Finland.

Day 2:

Keynote Forum

Boris Stilman

University of Colorado & STILMAN Advanced Strategies, USA

Keynote: Introduction to the primary language: Discovering the algorithm of discovery

Time : 09:30-10:10

Biography:

Boris Stilman is a Professor of Computer Science at the University of Colorado Denver, USA, and the Chairman and CEO of STILMAN Advanced Strategies, LLC, USA. From 1972 to 1988, in Moscow, USSR, he was involved in the advanced research project PIONEER, led by former World Chess Champion Professor Mikhail Botvinnik.

Abstract:

This study emphasizes recent results of my research related to the primary language of the human brain and its applications. Our hypothesis is that the primary language is the language of visual streams, i.e., it is based on mental dynamic images (movies), visual streams that drive reasoning, reading and writing, and translation, and serve as a different form, the foundation, for all the sciences (following von Neumann's suggestion). Note that, according to this hypothesis, the primary language is not a language in the mathematical sense. This study is interested in revealing the detailed nature of the primary language by investigating ancient algorithms crucial for the development of humanity. It is likely that such ancient algorithms are powered by the primary language directly and thus utilize symbolic reasoning on a limited scope. Our contention is that the hypothetical Algorithm of Discovery (AD) must be one such algorithm. Yet another such algorithm is Linguistic Geometry (LG), a type of game theory that generates detailed intelligent courses of action (predicts the behavior) of all sides in a conflict with accuracy exceeding the level of human experts. In a way, we use a Rosetta Stone approach by investigating three items together: the primary language, LG and the AD.

Keynote Forum

Steve Wells

Fast Future Publishing Ltd, UK

Keynote: Going beyond genuine stupidity to ensure AI serves humanity

Time : 10:10-10:50

Biography:

Steve Wells is a global futurist, experienced speaker, strategist and facilitator. He helps delegates and clients understand the key future factors driving innovation, growth and disruptive change, highlighting the new thinking and business models being enabled by exponential technologies such as AI, immersive technologies like augmented and virtual reality, and hyper-connectivity. He has a professional background as an independent Strategy Consultant and Researcher, and as a Strategy Development Manager in Pfizer's UK business, where he worked at C-suite and senior manager level to lead and facilitate the company's strategic planning process.

Abstract:

Almost every new technology arrives with a fan base claiming it will revolutionize life on Earth. For some, AI is just one more in a long list of over-hyped technologies that won't live up to its promise. At the other end of the spectrum are those who believe this could literally be the game-changing invention that reshapes our world. They argue that humanity has a directional choice: Do we want the transformation to enable an unleashing of human potential, or to lead us towards the effective end of life on the planet? We believe AI is like no technology that has gone before, but we are far too early in its evolution to know how far and how rapidly this Fourth Industrial Revolution powered by smart machines might spread. What is clear is that, in real terms, although we are only in the very early stages of the AI revolution, we can already see that it has the potential to disrupt every aspect of our private and public lives. Indeed, AI is already having a dramatic impact across everything from medical diagnosis and construction to government decision making, financial services, and even dating sites. As the pace of development accelerates and AI's potential becomes a little clearer, so the warnings grow ever stronger about the threat to jobs and privacy and the risks of humanity becoming enslaved by the machine. In a fast-changing world with a rapidly changing reality, it is no surprise that as individuals, businesses and even governments, we are often only planning for the next month, quarter, or year. However, the far-reaching and seemingly limitless potential applications, impacts and implications of AI demand that we look more deeply into the opportunities and challenges that an AI-enabled future might bring. So the question is: how can we ensure that we go beyond genuine stupidity in preparing for artificial intelligence?

Keynote Forum

Mihaela Ulieru

IMPACT Institute for the Digital Economy

Keynote: Personal analytic engines through blockchain: Disrupting big data

Time : 11:00-11:40

Biography:

Mihaela Ulieru is an expert in distributed intelligent systems. For her scientific work, which has positively impacted citizens in emerging and advanced economies across Asia Pacific, North America and Europe, she was awarded, among many other honors, the "Industrial Research Chair in Intelligent Systems" and the "Canada Research Chair in e-Society", and was appointed to numerous boards, among them the Science Councils of Singapore, Canada and the European Commission, and the Global Agenda Council of the World Economic Forum. She founded two research labs and has led several international large-scale projects, among which: Organic Governance, Adaptive Risk Management, Self-organizing Security, Living Technologies and Emulating the Mind.

Abstract:

Recently, Distributed Ledger Technologies (Blockchain) have entered the big data space, threatening to disrupt the industry through innovative initiatives such as Endor.com, Numerai.io or Matrix.io. These new kids on the (AI) block promise to democratize analytics through Blockchain and become, as in the case of Endor, the "Google of predictive analytics." Unlike Google, however, which is a centralized platform, Endor promises a decentralized protocol called "Endor.coin" that enables any member of the community to contribute to its improvement and be rewarded in tokens, which in turn can be used to answer questions from an ever-improving prediction engine. A protocol enables something. Just like TCP/IP is the protocol that enables the peer-to-peer exchange of files, and Blockchain is the protocol that enables the peer-to-peer exchange of assets, Endor proposes to be the protocol for the Internet of Predictions, enabling anyone to improve it by plugging in new prediction engines. Just like anyone can build new applications on Ethereum, anyone can use Endor to create new businesses, such as new blockchain-enabled insurance models, predictive e-Health and personal medicine, optimized services for small businesses seeking to better use existing advertisement services, innovative marketing models on blockchain, and so on. The key to all of this will be building a vibrant community of users and contributors. On centralized platforms, community happens through "if you build it [the platform], they will come." On blockchain, the motto is "if they come, they will build it", and contributors are brought in by giving them tokens. We typically think of such tokens as securities to be liquidated, but Endor is promising a new method: the tokens (EDR) will not be liquidated but rather act as a utility for accessing the overlying services, such as data or predictive models. Endor plans to build its capabilities by committing 60% of all tokens to contributors: entrepreneurs (called "catalysts") that build businesses using the platform will receive 25%, researchers that build the algorithms used in Endor's library will receive 15%, and strategic partners such as Bancor and ORBS that maximize distribution of the Endor coin will receive 20% of the total amount of tokens. A crucial factor in the success of such decentralized endeavors is an appropriate allocation of tokens that ensures the right mix of stakeholders in the ecosystem. The creation of analytics engines on blockchain is a revolution in the way we think about and leverage big data. Using blockchain, a company like Endor can address many of the ills plaguing centralized data platforms by ensuring: (1) data sovereignty, letting users choose to provide their data in exchange for tokens; and (2) accountability, giving full transparency to the user as to what data and what engine were used to make a prediction.
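To make the token arithmetic above concrete, here is a minimal Python sketch of the contributor allocation; the percentages come from the abstract, while the total supply figure and all names are hypothetical placeholders.

```python
# Illustrative check of the contributor token split described above.
# Only the percentages come from the abstract; the total supply is hypothetical.
TOTAL_TOKENS = 1_000_000

allocation = {
    "catalysts (entrepreneurs building businesses)": 0.25,
    "researchers (algorithms in Endor's library)": 0.15,
    "strategic partners (e.g. Bancor, ORBS)": 0.20,
}

for group, share in allocation.items():
    print(f"{group}: {share:.0%} -> {share * TOTAL_TOKENS:,.0f} tokens")

# The three groups together account for the 60% committed to contributors.
assert abs(sum(allocation.values()) - 0.60) < 1e-9
```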

  • Cognitive Computing | Natural Language Processing | Perceptrons | Backpropagation
Location: Laine
Speaker
Biography:

Erwin E Sniedzins is the President of Mount Knowledge Inc., Toronto, Canada. The company is a global leader in AI, neural networks, automatic gamification of any textual data and micro reinforcement learning. He has patented the Knowledge Generator™ (KG), an artificial intelligence application that takes any digitized textual content and automatically creates micro self-reinforcement learning and personalized gamification of this content as lessons, exercises and tests with scores and marks in dynamic real-time. The KG technology enables people to turn Data into Knowledge (DiK) 32% better, faster and easier, with more confidence and fun; no teacher or course designer is required. He has authored and published 12 books. He is also a Professor at Hebei University.

Abstract:

Educators, students, employers and employees are inundated with big data and are seeking relief. AI provides the bridge between big data and personalized data using NLP and GANN. In recent years, Natural Language Processing (NLP) and Genetic Algorithm Neural Networks (GANN) have entered to provide "Data into Knowledge" (DiK) solutions. Research with NLP and GANN has enabled tools that selectively filter big data and combine it into micro self-reinforcement learning and personalized gamification of any DiK in dynamic real-time. It is the combination of GA, NLP, MSRL and dynamic gamification that has enabled people to find relief in their quest to turn DiK 32% better, faster and easier, and with more confidence, than with traditional learning methods.

Speaker
Biography:

Mark Strefford is the Founder of Timelaps AI Limited, UK. He is an experienced technology leader who delivers innovative technologies to leading national and international private and public sector organizations.

Abstract:

Introducing AI and ML technologies into the enterprise, as with any IT project, presents a number of challenges. Many of these are comparable to the delivery of a standard IT project, such as ensuring there is a clear business outcome, a defined scope for delivery, stakeholder engagement, a suitably qualified team and the ability to measure progress as the project continues. However, there are subtle differences that, if not addressed or if business and technology stakeholders are not made aware of them, can increase risk, cause issues during delivery and ultimately cause the project to fail. With a much-hyped technology such as AI, still immature in its wide-scale enterprise adoption, a high-profile failure can set an organization back in its appetite to adopt the technology again in the near to medium term. The aim is to highlight some of the issues that are likely to arise, from understanding the organization's strategy for AI, structuring a project delivery approach, engaging with users, ensuring that you have the right mix of resources to deliver, integrating with wider systems, and addressing security, privacy and data controller concerns, and subsequently to present some proven approaches to addressing these.

Speaker
Biography:

Gilberto Batres-Estrada holds an MSc in Theoretical Physics and an MSc in Engineering with specialization in Applied Mathematics and Statistics. He works as a Consultant Data Scientist and his domain of expertise is deep learning. He also conducts independent research in deep learning and reinforcement learning in finance with researchers at Columbia University, NY, USA. He has contributed to the book Big Data and Machine Learning in Quantitative Investment, with a focus on long short-term memory networks. Previously, he worked in finance as a Quantitative Analyst and built trading algorithms for a hedge fund.

Abstract:

A Recurrent Neural Network (RNN) is a type of neural network suited for sequential data. Its recurrent connections from one time step to the next introduce a depth in time which, in theory, is capable of learning long-term dependencies. There are, however, two serious issues with vanilla RNNs. The first is the problem of exploding gradients, where the gradient grows without bound, leading to instabilities in the system; growing gradients can in part be alleviated by a technique known as gradient clipping. The second problem is that of vanishing gradients, which has no general solution for the vanilla RNN except in some special cases. The Long Short-Term Memory network (LSTM) is a type of RNN designed to solve both of these issues, experienced by RNNs during the training phase. The solution lies in replacing the regular neural units in an RNN with units called memory cells. An LSTM memory cell is composed of four gates controlling the flow of the gradients, which dynamically respond to the input by changing the internal state according to the long-term interactions in the sequence. The LSTM has been shown to be very successful at solving problems in speech recognition, unconstrained handwriting recognition, machine translation, image captioning, parsing and, lately, prediction of stock prices and time series. To combat over-fitting, techniques such as dropout have become standard when training these models. We start by presenting the RNN's architecture and continue with the LSTM and its mathematical formulation. This study also focuses on the technical aspects of training, regularization and performance.
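As a concrete companion to the description above, the following NumPy sketch implements one step of an LSTM memory cell together with norm-based gradient clipping; the stacked-parameter layout and all names are illustrative assumptions rather than details taken from the talk.

```python
# Minimal sketch of one LSTM memory-cell step (illustrative names, NumPy only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, U, b):
    """One time step of an LSTM cell.

    W, U, b hold the stacked parameters of the four gate computations
    (input i, forget f, output o, candidate g), each of size `hidden`.
    """
    hidden = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b           # shape: (4 * hidden,)
    i = sigmoid(z[0 * hidden:1 * hidden])  # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])  # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])  # candidate cell state
    c_t = f * c_prev + i * g               # new cell state (the "memory")
    h_t = o * np.tanh(c_t)                 # new hidden state
    return h_t, c_t

def clip_gradient(grad, threshold=5.0):
    """Norm clipping: rescale the gradient if its norm exceeds the threshold."""
    norm = np.linalg.norm(grad)
    return grad * (threshold / norm) if norm > threshold else grad
```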

Speaker
Biography:

Abstract:

Fire-related disasters are the most common type of emergency situation and require thorough analysis for a quick and precise response. The damage due to fire is substantial: the number of deaths due to fire-related emergencies is around 18,000 per year, and the economic loss is considerable, with forest fires in Northern California last year resulting in $65 billion in property damages. Emergency response is the most critical part of disaster management; it deals with the organization, allocation and deployment of resources in the most efficient manner, with the sole purpose of protecting public health and minimizing the damage caused by a natural or human-made disaster, and brings the situation under control. We intend to develop and apply novel AI algorithms, strategies and methodologies to fire emergency situations and outperform previous state-of-the-art models in performance, precision and efficiency. The main aim is to develop a system that can detect fire as soon as possible, analyze it and the surroundings thoroughly, and finally plan the evacuation of all people with as little damage as possible; all this would be done in less than 5 seconds. Such an accurate, versatile and fast system requires many different sub-fields of deep learning to come together. It will employ image recognition and classification with CNNs, object detection and image segmentation with R-CNNs, and deep reinforcement learning with DQNs and policy gradients. The first step of building the fire management system was successfully accomplished. The system would be quite malleable and can be easily modified for other applications. Even though it will be large and complex, its tractability won't be compromised because it is divided into several modules. It will be the first end-to-end trainable deep neural network emergency management system.

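As an illustration of the image-classification component mentioned above, here is a minimal PyTorch sketch of a fire/no-fire CNN classifier; the architecture, channel counts and names are hypothetical choices, not the authors' actual model.

```python
# Minimal PyTorch sketch of a CNN fire classifier (illustrative architecture).
import torch
import torch.nn as nn

class FireClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # fire / no-fire
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage: logits = FireClassifier()(torch.randn(1, 3, 224, 224))
```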
Speaker
Biography:

Patricio Julian Gerpe is Co-founder of the Argentine AI Community (IA AR) and an award-winning serial technology entrepreneur with more than 7 years in the startup scene. He is the main developer at Enpov, a start-up ranked in the top 20 out of more than 30,000 projects at the UN-backed Hult Prize competition. He is an advocate for open AI and collaborative data science and has been interviewed on PhD podcasts and in think tank discussions at top-tier multinationals. He is also a TEDx speaker, content creator, startup mentor and full-stack developer specialized in process automation and intelligent agents, and has been featured on TV and in mass media.

Abstract:

Statement of the Problem: Female rural merchants in Ethiopia lose at least 30% of their daily income due to the inaccessibility of current on-demand transportation infrastructure. These losses are driven by (A) time lost waiting for transportation, (B) constraints on the capacity they can carry, and (C) lack of access to the most crowded markets. The purpose of this study is to assess the responsiveness of income for female rural merchants to the introduction of an agent-based computer program that connects them with the supply of on-demand transportation and finds an optimal fare. Specifically, the program is aimed to simulate the decision-making process behind the experience of going to markets to sell goods in Ethiopia. Methodology & Theoretical Orientation: First, an ethnographic study, using participant observation, in-depth interviews and focus groups, was conducted to assess the current situation of the problem. Second, qualitative information from these interviews, such as pain points and gain points of the experience of looking for on-demand transportation, was illustrated in a value proposition map, then scored, ranked and quantified based on interview results to represent utility. Geolocalization data points were taken as inputs as well. The experience was tested twice, with and without the application. Findings: The software application on average seemed to increase income for rural merchants in a range of 30%-50%. However, it is still premature to determine how likely the intelligent agent is to affect merchants' income. A much bigger subject sample and a range of data points are still needed to determine the effectiveness of this application. Conclusion & Significance: This research is preliminary and lays necessary groundwork for further research on this topic. Part of this work will be open sourced to the scientific community, aiming to encourage the exploration of new applications of agent-based software in assisting high-risk communities to make smarter decisions.
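As a toy illustration of the kind of agent-based matching described above, here is a small Python sketch pairing a merchant with a driver by minimizing a fare; all names, the distance approximation and the fare rule are hypothetical illustrations, not the study's model.

```python
# Toy agent-based matching sketch: pick the cheapest feasible driver.
from dataclasses import dataclass
import math

@dataclass
class Merchant:
    lat: float
    lon: float
    cargo_kg: float

@dataclass
class Driver:
    lat: float
    lon: float
    capacity_kg: float
    rate_per_km: float

def distance_km(a, b):
    # Rough equirectangular approximation, adequate for short distances.
    dx = (a.lon - b.lon) * 111.32 * math.cos(math.radians((a.lat + b.lat) / 2))
    dy = (a.lat - b.lat) * 110.57
    return math.hypot(dx, dy)

def match(merchant, drivers):
    """Return the feasible driver minimizing fare (distance * rate), plus the fare."""
    feasible = [d for d in drivers if d.capacity_kg >= merchant.cargo_kg]
    if not feasible:
        return None, None
    best = min(feasible, key=lambda d: distance_km(merchant, d) * d.rate_per_km)
    return best, distance_km(merchant, best) * best.rate_per_km
```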

Biography:

John Kontos holds a PhD in Computing from Birmingham University. He has published over 100 papers on, among other topics, Natural Language Processing, Deductive Question Answering, Explanation Systems and Machine Learning. He headed the design and construction of the first Greek digital general-purpose computer. He has served as Full Professor of AI at the Economic University of Athens and at the University of Athens.

Abstract:

This poster proposes to conceptualize different Machine Learning methods, e.g. the ANN-based ones, as equivalent to different string matching methods for retrieving from a table of examples. The aim of the proposal is to clarify what Machine Learning methods can be reduced to, in order to avoid the anthropomorphic concept of Machine Learning. Such a reduction may also help in converting "learned" systems to "explainable" systems that enhance trust. The table below shows the correspondence between string matching methods and Machine Learning methods.

String Matching method                                Machine Learning method

1. Sequential Serial
2. Non-sequential Serial                              Decision Trees (DT)
3. Weighted Total Parallel                            Single ANN
4. Weighted Partial Parallel                          Multiple ANNs
5. Hierarchical Weighted Partial Parallel             "Deep" ANNs

First, we notice that Method No. 2 differs from No. 1 in the order of the symbol-by-symbol matching procedure. This change of order, obtained by computing a DT (e.g. with ID3), is meant to reduce the computing effort necessary for matching between a set of examples and a string under test. Since, according to recent literature, methods Nos. 3 to 5 may be reduced to DTs, we may consider them variants of serial string matching. Future work must analyze the "unseen" example cases.
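As a toy illustration of the reordering described above, the following Python sketch contrasts Method No. 1 (left-to-right symbol-by-symbol matching) with Method No. 2 (testing the most discriminating symbol position first, as an ID3-style DT would select); the three-entry example table is hypothetical.

```python
# Hypothetical table of examples used for retrieval.
EXAMPLES = {"cat": "animal", "car": "vehicle", "cap": "clothing"}

def sequential_match(query):
    """Method No. 1: compare symbol by symbol, left to right, example by example."""
    for example, label in EXAMPLES.items():
        if len(query) == len(example) and all(q == e for q, e in zip(query, example)):
            return label
    return None

def tree_match(query):
    """Method No. 2: test the most informative position first (here, position 2),
    mimicking the reordering an ID3-style decision tree would compute.
    One symbol test suffices because all examples share the prefix 'ca'."""
    branch = {"t": "animal", "r": "vehicle", "p": "clothing"}
    return branch.get(query[2]) if len(query) == 3 else None

# Both methods retrieve the same label, but the DT ordering does less work.
assert sequential_match("car") == tree_match("car") == "vehicle"
```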