The emergence of Intelligence Science (IS) is underpinned by advances in intelligence theories and by novel mathematical means beyond the traditional analytic ones in the domain of real numbers (R). IS has been a pinnacle and a lasting frontier of the Abstract Sciences (AS), building upon the data (sensory), information, knowledge, and system sciences in parallel to the conventional concrete sciences. Abstract Intelligence (aI, alpha-I) is the general and mathematical form of intelligence that describes the universe of discourse of natural and artificial intelligence as well as their symbolic structures and behaviors. This keynote presents an overview of IS, particularly its aI foundations. Contemporary paradigms of aI will be demonstrated, including e-brains, brain-inspired systems, cognitive systems, autonomous systems, unmanned systems, cognitive robots, hybrid human-machine systems, joint human-machine task forces, cognitive knowledge bases, unsupervised neural networks, cognitive learning machines, and intelligent IoTs.
Yingxu Wang is a professor of cognitive systems, brain science, software science, and denotational mathematics. He is the Founding President of the International Institute of Cognitive Informatics and Cognitive Computing (ICIC, http://www.ucalgary.ca/icic/). He is a Fellow of BCS, ICIC and WIF, a P.Eng. of Canada, and a Senior Member of IEEE and ACM. He has held visiting professor positions at Oxford University (1995), Stanford University (2008, 2016), UC Berkeley (2008), and MIT (2012). He received a PhD in Computer Science from Nottingham Trent University, UK, in 1998 and has been a full professor since 1994. He is the founder and steering committee chair of the annual IEEE International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC), established in 2002. He is the founding Editor-in-Chief of the Int'l Journal of Cognitive Informatics & Natural Intelligence, the Int'l Journal of Software Science & Computational Intelligence, and the Journal of Mathematical & Computational Methods. He is an Associate Editor of IEEE Trans. on Cognitive and Developmental Systems (TCDS) and the IEEE Computer Society Representative to the steering committee of TCDS. Dr. Wang is the initiator of several cutting-edge research fields, such as cognitive informatics, denotational mathematics (concept algebra, process algebra, system algebra, semantic algebra, inference algebra, big data algebra, fuzzy truth algebra, fuzzy probability algebra, fuzzy semantic algebra, visual semantic algebra, and granular algebra), abstract intelligence (aI), the spike frequency modulation (SFM) theory, mathematical models of the brain, cognitive computing systems, cognitive learning engines, and the cognitive knowledge base theory.
His work and basic studies have spanned contemporary disciplines of intelligence science, robotics, knowledge science, computer science, information science, brain science, system science, software science, data science, neuroinformatics, cognitive linguistics, computational intelligence, and engineering systems. He has published 490+ peer-reviewed papers and 36 books in the aforementioned transdisciplinary fields. He has presented 44 invited keynote speeches at international conferences. He has served as general chair or program chair for 28+ international conferences. He has led 10+ international, European, and Canadian research projects as PI through intensive collaborations with renowned peers and leading industrial partners. He is the recipient of dozens of international awards for academic leadership, outstanding contributions, best papers and teaching over the last three decades. He is a top 2.5% scholar worldwide and top 10 at the University of Calgary according to ResearchGate's international statistics.
We are entering an exciting era in which human intelligence is being enhanced by machine intelligence through big-data-fueled artificial intelligence (AI) and machine learning (ML). However, recent work shows that privately trained prediction models are vulnerable to adversarial examples and privacy invasion, both of which turn AI and ML against themselves: through inference attacks, adversaries can maliciously manipulate prediction outputs with only black-box access to a machine-learning-as-a-service API. We argue that trustworthiness should be an essential and mandatory component of a deep learning system for algorithmic decision making. This includes (1) the understanding and measurement of the level of trust and/or distrust that we place on a deep learning algorithm to perform reliably and truthfully, and (2) the development of formal metrics to quantitatively evaluate and measure the trust level of an algorithmic decision-making result, by examining the trustworthiness of the algorithm with respect to intentional and unintentional effects of execution in the presence of different adversarial settings. In this talk, I will share some of our empirical results and characterization of trust and privacy of deep learning in adversarial settings.
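As a minimal, self-contained illustration of the adversarial-example phenomenon the abstract refers to (not the speaker's own attack, which operates in a black-box setting), the following sketch applies the classic Fast Gradient Sign Method to a toy logistic model; the weights and input are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    """Class-1 probability of a toy logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps in the
    direction that increases the loss for true label y."""
    p = predict(w, x)
    # gradient of the logistic loss w.r.t. the input is (p - y) * w
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

w = [2.0, -3.0, 1.0]          # fixed toy model weights (illustrative)
x = [0.5, 0.2, 0.1]           # clean input, true label y = 1
x_adv = fgsm(w, x, 1, eps=0.3)

print(predict(w, x))      # ~0.62 -> classified as 1
print(predict(w, x_adv))  # ~0.21 -> prediction flipped to 0
```

A small, bounded perturbation is enough to flip the model's decision, which is the core of the trustworthiness concern raised in the talk.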
Prof. Dr. Ling Liu is a Professor in the School of Computer Science at Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data intensive systems. Prof. Liu is an internationally recognized expert in the areas of Big Data Systems and Analytics, Distributed Systems, Database and Storage Systems, Internet Computing, Privacy, Security and Trust. Prof. Liu has published over 300 international journal and conference articles, and is a recipient of best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, the 2005 Pat Goldberg Memorial Best Paper Award, IEEE CLOUD 2012, IEEE ICWS 2013, ACM/IEEE CCGrid 2015, and IEEE Edge 2017. Prof. Liu is an elected IEEE Fellow and a recipient of the IEEE Computer Society Technical Achievement Award. Prof. Liu has served as general chair and PC chair of numerous IEEE and ACM conferences in the fields of big data, cloud computing, data engineering, distributed computing, very large databases, and the World Wide Web, and served as the editor in chief of IEEE Transactions on Services Computing from 2013 to 2016. Currently Prof. Liu is co-PC chair of The Web 2019 (WWW 2019). Prof. Liu's research is primarily sponsored by NSF, IBM and Intel.
Contemporary robot control methodologies have their origin in classical control systems design. Except for the increased adoption of modern computers in the hardware implementation, design and analysis have been limited to the original know-how. When the robotics field covered operations in mostly structured environments, there was limited need to explore complexity beyond the standard requirements of high speed and high repeatability. These, along with payload capacity, robot weight and power requirements, provided the necessary specifications that could be met for most applications. The situation started to change when the environments became unstructured and not fully known, a shift brought about by the emergence of mobile robot applications. It gave rise to requirements that could not be satisfied with the contemporary control systems methodologies. As a result, a new thrust emerged: that of Artificial Intelligence. It provides tools to deal with situations that are inherently complex, such as robots operating in unknown environments. AI has provided an opportunity to use fundamental AI techniques to control systems operating in only partially known situations. There is a need to look at this quest from a critical point of view: AI serves well when the systems can be properly trained a priori or used in non-real-time applications, e.g., e-commerce, as opposed to robotics, which requires real-time operation that poses limitations on the current use of AI.
Dr. Andrew Goldenberg, PhD, is the founder (1982) of Engineering Services Inc. (ESI). In May 2015 ESI was acquired by a Chinese consortium located in Shenzhen, P.R. China. In 2016 the company went public in Hong Kong. Dr. Goldenberg is the CTO of the public company and of its wholly owned subsidiary Anzer Intelligent Engineering Ltd. of Shenzhen, China, as well as the CEO of ESI. Since 1982 Dr. Goldenberg has also been a Professor of Mechanical and Industrial Engineering at the University of Toronto, cross-appointed in the Institute of Biomaterials and Biomedical Engineering and the Department of Electrical and Computer Engineering. Currently Dr. Goldenberg is a Professor Emeritus at the University of Toronto.
Robotics is an interdisciplinary branch of science and engineering that can be used to develop machines replicating human capacities. For example, robots may replicate locomotion and hand dexterity. Robotics overlaps with Artificial Intelligence, often reduced to Machine Learning, in order to mimic higher human cognitive functions, such as language. Machine Learning often uses statistical techniques to give computers the capacity to learn and make predictions or decisions on the basis of data. With supervised learning, the computer learns from a training set previously annotated by humans, and the trained model is evaluated on a test set. It is well known that performance decreases when the training set is not large enough, in which case Reinforcement Learning can be used to extend it. Most research and applications in Natural Language Understanding rely on Skinner's model of human behavior. However, overwhelming evidence points to the fact that Behaviorism is not suited to model the human capacity for language. Humans are biologically predisposed to develop the languages to which they are exposed without formal instruction or extensive training on large datasets. Consequently, Behaviorist-based applications in Robotics and Artificial Intelligence fail to understand natural language efficiently. In contradistinction, machines replicating the human capacity for language incorporate a generative procedure capable of computing and predicting language creativity. The future challenge in Robotics and Artificial Intelligence is indeed the incorporation into machines of the generative procedure underlying human knowledge of language.
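The supervised train/evaluate cycle described above can be sketched in a few lines with a 1-nearest-neighbour classifier; all data points and labels here are invented for illustration.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier trained
# on a small human-annotated set and evaluated on a held-out test set.
# The feature vectors and labels are purely illustrative.

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def predict(train, x):
    """Return the label of the training example closest to x."""
    return min(train, key=lambda ex: dist2(ex[0], x))[1]

train = [((0.0, 0.0), "noun"), ((0.2, 0.1), "noun"),
         ((1.0, 1.0), "verb"), ((0.9, 1.2), "verb")]
test = [((0.1, 0.0), "noun"), ((1.1, 0.9), "verb")]

accuracy = sum(predict(train, x) == y for x, y in test) / len(test)
print(accuracy)  # 1.0 on this tiny test set
```

The dependence on the annotated `train` set is exactly the point of the argument: with too few annotated examples, `predict` has nothing to generalize from.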
Anna Maria Di Sciullo completed her PhD at the University of Montreal. She carried out post-doctoral studies at MIT and was subsequently a visiting scholar at MIT and Harvard. She has received several honors and awards from Canada and Italy for her achievements in language science and technology. She is the director of the Interface Asymmetry Lab. She has written several books and papers on the properties of human language and asymmetry-based parsing, published by prestigious publishers including MIT Press and Oxford University Press. She created the International Network in Biolinguistics, a platform fostering research and interaction among linguists, computational linguists and biologists.
We consider an emerging class of challenging networked multimedia applications called Real-Time Online Interactive Applications (ROIA). ROIA are networked applications connecting a potentially very high number of users who interact with the application and with each other in real time, i.e., a response to a user's action happens virtually immediately. Typical representatives of ROIA are multiplayer online computer games, advanced simulation-based e-learning, and serious gaming. All these applications are characterized by high performance and QoS requirements, such as: short response times to user inputs (about 0.1-1.5 s); frequent state updates (up to 100 Hz); and large, frequently changing numbers of users in a single application instance (up to tens of thousands of simultaneous users). This talk will address two challenging aspects of future Internet-based ROIA applications: a) using Mobile Cloud Computing to allow high application performance when a ROIA application is accessed from multiple mobile devices, and b) managing dynamic QoS requirements of ROIA applications by employing the emerging technology of Software-Defined Networking (SDN).
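The "up to 100 Hz" state-update requirement is typically met with a fixed-timestep loop; the following sketch shows the accumulator logic such loops use to decide how many updates are owed after a frame of arbitrary real duration (the constants are illustrative, not from the talk).

```python
# Fixed-timestep update scheduling, as used in real-time servers that
# must hold a steady state-update rate (here a 100 Hz target).
# The accumulator converts arbitrary elapsed wall-clock time into a
# whole number of fixed-length state updates plus a carried remainder.

TICK = 1.0 / 100.0  # 100 Hz target update rate (10 ms per update)

def updates_due(elapsed, accumulator=0.0):
    """Return (number of state updates owed after `elapsed` seconds,
    leftover time carried into the next frame)."""
    accumulator += elapsed
    n = int(accumulator / TICK)
    return n, accumulator - n * TICK

n, rest = updates_due(0.035)   # one 35 ms frame
print(n, round(rest, 3))       # 3 updates, 0.005 s carried over
```

Keeping the remainder rather than discarding it is what keeps the long-run update rate pinned to the target even when frame times fluctuate.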
Sergei Gorlatch has been Full Professor of Computer Science at the University of Muenster (Germany) since 2003. Earlier he was Associate Professor at the Technical University of Berlin, Assistant Professor at the University of Passau, and Humboldt Research Fellow at the Technical University of Munich, all in Germany. Prof. Gorlatch has about 200 peer-reviewed publications in renowned international books, journals and conferences. He was principal investigator in several international research and development projects in the field of parallel, distributed, Grid and Cloud algorithms, networking and computing, as well as e-Learning, funded by the European Commission and by German national bodies. Among his recent achievements in the area of communications and future Internet is the novel Real-Time Framework (www.real-time-framework.com) developed in his group as a platform for high-level development of real-time, highly interactive applications for entertainment. In the area of networking, his group has been recently working in the pan-European project OFERTIE on an application-oriented Quality of Service approach for emerging Software-Defined Networks (SDN).
The current research and the fast pace of innovation and development in the fields of Robotics, the Internet of Things (IoT) and Artificial Intelligence (AI), in conjunction with ubiquitous Internet access, Smart Computational Devices (SCD), and Ultrafast Global Communication, provide an excellent platform for Future Cyber Automation. The third millennium is a new era of the Smart, Fully Automated Cyberspace, which is becoming pervasive in nature while connecting the next generation of Ultra-smart Robotic Devices with computationally powerful SCDs accessible to anyone, anywhere and at any time. In support of Smart Robotics, telecommunications network providers and SCD developers are working together to create much faster transmission channels with provision of higher quality of service for any multimedia content, for anyone, anywhere, at any time. Human-machine interfaces with high-definition audio and video facilitate seamless control of Smart Robotics & Computational Devices (SRCD), which are becoming a common technology in family homes, business, academia, and industry worldwide. The author discusses the current and future trends of research, innovation and development in SRCD, Cyber Physical Critical Infrastructures (CPCI) and Cyber Assurance, in conjunction with the Future Ultra-Fast Internet and Ultra-SRCD. The author promotes the creation of multidisciplinary, multinational research teams and the development of Next Generation SRCD and Fully Automated Environments utilizing Ultra-Smart Robotic & Computational Devices, in conjunction with the critical Cyber Safety and Assurance challenges of today and tomorrow.
Prof. Babulak is an accomplished global scholar, consultant, educator, engineer and polyglot. He has published and presented his work worldwide. He has been an invited speaker at the University of Cambridge, MIT, Purdue University, Yokohama National University and the University of Electro-Communications in Tokyo, Japan, Shanghai Jiao Tong University, Sungkyunkwan University in Korea, Penn State in the USA, Czech Technical University in Prague, the University of the West Indies, Graz University of Technology in Austria, and other prestigious institutions worldwide. His biography has been cited in the Cambridge Blue Book, the Cambridge Index of Biographies, Stanford Who's Who, and a number of issues of Who's Who in the World and Who's Who in America.
Intelligent cognitive robotics, also commonly known as the IT design of intelligent system-of-systems engineering, is an interdisciplinary field which deals with the discovery and design of new types of autonomous/swarm macro-, micro-, and nano-robots and of a computational intelligence toolkit. The development of robust intelligent control systems based on quantum soft computing (QSC) is one of the important tasks of the abovementioned IT design. QSC is a new paradigm representing a synergetic union of computational intelligence types: quantum neural networks, quantum genetic algorithms and quantum fuzzy systems form the background of QSC. One example is a new approach to deep machine learning and pattern recognition in intelligent cognitive robotics based on quantum neural networks. A method of global optimization in control problems is considered using the example of a quantum genetic algorithm; in particular, the application of a quantum genetic algorithm to the control of the nonlinear "cart-pole" system is described. The possibility of applying a neurointerface together with different types of regulators, illustrated by typical examples of controlling an autonomous intelligent vehicle, is one element of a new approach to the design of IT cognitive control systems. An assessment is given of the possibilities of applying computational intelligence methods and tools to improve control system performance and reliability. The quantum soft computing optimizer of knowledge bases (QSCOptKB™) is an intelligent toolkit for knowledge extraction from experimental big data. Special attention is paid to quantum fuzzy inference based on a quantum genetic algorithm and on a modified Grover quantum search algorithm. The aim of this report is to show experimentally the possibilities of effective application of a cognitive interface ("brain-computer-actuating device") on the example of motor vehicle driving (a mobile robot).
The report also presents the application of modern management technologies and shows the role and necessity of computational intelligence in operating the "brain-computer" interface in order to improve the reliability and robustness of the advanced control system.
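For readers unfamiliar with the "cart-pole" benchmark the talk refers to, here is a classical baseline: the standard cart-pole pole dynamics stabilized by a simple PD law. The quantum fuzzy controller of the talk is not reproduced; all physical parameters and gains below are illustrative assumptions.

```python
import math

# Classical PD stabilisation of the cart-pole benchmark (pole angle
# only; the cart position is ignored for brevity, as the pole dynamics
# do not depend on it). Parameters and gains are illustrative.

G, M_CART, M_POLE, L, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(theta, theta_dot, force):
    """One semi-implicit Euler step of the standard cart-pole
    pole dynamics under an applied horizontal force."""
    total = M_CART + M_POLE
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + M_POLE * L * theta_dot ** 2 * sin_t) / total
    theta_acc = (G * sin_t - cos_t * temp) / (
        L * (4.0 / 3.0 - M_POLE * cos_t ** 2 / total))
    theta_dot += DT * theta_acc
    return theta + DT * theta_dot, theta_dot

theta, theta_dot = 0.05, 0.0               # small initial tilt (rad)
for _ in range(500):                       # 10 s of simulated time
    force = 50.0 * theta + 10.0 * theta_dot   # PD law on the angle
    theta, theta_dot = step(theta, theta_dot, force)

print(abs(theta) < 0.01)  # True: the pole settles upright
```

Quantum-fuzzy and genetic approaches, as discussed in the talk, target exactly the part this sketch hard-codes: choosing and adapting the control law and its gains.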
Sergey Ulyanov completed his PhD at the Moscow Institute of Physical-Technical Problems, Russia, and postdoctoral studies at Bauman Technical University, Russia. He is the scientific director of PronetLabs Co. and a scientific consultant to Yamaha Motor Co. and STMicroelectronics. He has published more than 30 papers in reputed journals and serves as an editorial board member of the Biomedical Engineering, Soft Computing, and Robotics and Mechatronics journals.
Computational Intelligence (CI) aims to realize artificial intelligence using the principles of soft computing, a family of biologically inspired techniques used in combined forms (such as genetic-algorithm-evolved fuzzy reasoning tools, genetic-algorithm-evolved neural networks, genetic-algorithm-tuned neuro-fuzzy systems, and others). CI has a great role to play in different areas of research in the design and development of intelligent robots. For example, it can help to design and develop adaptive motion planners, adaptive controllers, and adaptive gait planners. Problems related to robot vision and multi-sensor data fusion can also be tackled using the principles of CI. Evolutionary robotics, a comparatively new field of robotic research, is the outcome of the above-mentioned philosophy. In this lecture, starting from the fundamentals, research issues in these areas will be highlighted.
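The genetic-algorithm component common to all the hybrids above can be sketched minimally: selection plus mutation tuning one real parameter. The fitness function below is a toy stand-in; in the hybrids mentioned it would score a fuzzy rule base or a set of network weights.

```python
import random

# A minimal genetic algorithm tuning one real parameter -- the same
# principle used (at far larger scale) to evolve fuzzy reasoning
# tools or neural-network weights. The fitness function is a toy.

random.seed(0)

def fitness(x):
    return -(x - 3.0) ** 2          # maximised at x = 3

pop = [random.uniform(0.0, 10.0) for _ in range(20)]
for _ in range(60):                  # generations
    # tournament selection: keep the better of two random individuals
    parents = [max(random.sample(pop, 2), key=fitness) for _ in pop]
    # Gaussian mutation acts as the variation operator
    pop = [p + random.gauss(0.0, 0.1) for p in parents]

best = max(pop, key=fitness)
print(round(best, 2))  # close to 3.0, the fitness optimum
```

Replacing `fitness` with a controller-performance score turns this loop into the "genetic-algorithm-tuned" schemes the abstract lists.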
Dilip Kumar Pratihar completed his Ph.D. at the Indian Institute of Technology Kanpur, India, and postdoctoral studies at the Kyushu Institute of Design, Fukuoka, Japan, and then at Darmstadt University of Technology, Germany, under the Alexander von Humboldt Fellowship. He is the Head of the Centre for Robotics, IIT Kharagpur, India. He has published more than 130 papers in reputed journals alone and serves as an editorial board member of 14 international journals. He has guided 18 Ph.D. students. He is a Fellow of the Institution of Engineers (India), a Senior Member of IEEE, and a Member of ASME. He is a recipient of a University Gold Medal.
The robotic agents are programmed in two rule-based languages: QuLog and TeleoR, the latter being an application-specific extension of QuLog.
The roots of TeleoR go back to the conditional action sequence plans of the first cognitive robot, SRI's Shakey of the late 1960s. These led to Nilsson's teleo-reactive robotic agent language T-R. TeleoR is a major enhancement of T-R.
QuLog is a flexibly typed multi-threaded logic+function+imperative rule language. Its declarative subset is used for encoding the agent's dynamic beliefs and static knowledge. Its imperative rules are used for programming the agent's behavior.
The task threads are programmed using sequences of TeleoR guarded action rules clustered into parameterised procedures. The rule guards are QuLog queries to the agent's dynamic beliefs, optionally using the static knowledge rules. The rule actions are one or more robotic actions to be executed in parallel, or a single call to a TeleoR procedure, which could be a recursive call. Both may be paired with a sequence of QuLog belief update and communication actions.
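The evaluation idea behind such guarded action rules can be sketched outside TeleoR itself: rules are ordered (guard, action) pairs, and on every cycle the first rule whose guard holds against the agent's current beliefs supplies the action. The predicates and actions below are invented for illustration; in real TeleoR the guards are QuLog queries over a belief store.

```python
# A Python sketch of teleo-reactive rule evaluation: the FIRST rule
# whose guard holds against the current beliefs supplies the action.
# Predicates and action names are hypothetical, not TeleoR syntax.

def first_applicable(rules, beliefs):
    for guard, action in rules:
        if guard(beliefs):
            return action
    return "wait"

collect_bottle = [
    (lambda b: b["holding"],            "go_to_bin"),
    (lambda b: b["bottle_in_gripper"],  "grasp"),
    (lambda b: b["bottle_seen"],        "approach_bottle"),
    # final catch-all guard, always true, like a TeleoR default rule
    (lambda b: True,                    "search"),
]

beliefs = {"holding": False, "bottle_in_gripper": False,
           "bottle_seen": True}
print(first_applicable(collect_bottle, beliefs))  # approach_bottle
```

Because the rules are re-evaluated continuously, a belief update (e.g. the bottle slipping from the gripper) automatically causes the agent to regress to an earlier rule, which is the teleo-reactive robustness property.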
We introduce the use of TeleoR and QuLog, and the multi-threaded agent architecture, with two robot control applications.
The first is two communicating and co-operating agents, each controlling a mobile robot in a co-operative bottle-collecting task with a target number of collected bottles. The agents communicate so that each knows the current total of collected bottles, and to avoid their robots colliding with minimal divergence from current paths. For the latter, communication is used to compensate for poor perception.
The second is a multi-tasking agent fairly sharing the use of a robotic arm across multiple concurrent construction tasks. The multi-tasking program is a slight modification of the single-task program. Colleagues at UNSW Sydney have ported a slightly more complex two-arm control version of the agent program to a Baxter robot; see https://www.doc.ic.ac.uk/~klc/20160127-LABCOT-HIx4.mp4. Some familiarity with the concepts of logic programming or Prolog is useful but not essential.
Keith Clark has two first degrees, in Mathematics and Philosophy, and a PhD in Computational Logic. He is one of the fifteen recognized founders of the logic programming research area. His first significant publication was a textbook, "Programs, Machines and Computation" (1975), written with a colleague. Based on a novel approach to automata theory proposed by Dana Scott, it made automata theory more accessible to computer science students. It also covered program correctness and normal form results for programs, and introduced partial recursive functions as an equational semantics for register machine flowchart programs. Since then his research has covered: theoretical results in computational logic (the logical explication of negation as failure, 1978); the design, implementation and use of new logic programming languages (IC-Prolog 1980, Parlog 1983, Qu-Prolog 2000, Go! 2002, QuLog 2013); other rule languages for programming multi-tasking communicating software agents (April 1993); and robotic control agents (TeleoR 2013). He has consulted for the Japanese Fifth Generation Project, Hewlett Packard, IBM, ICL and Fujitsu. He is a co-founder (1980) of the Prolog software and consultancy company Logic Programming Associates (LPA), whose star product was MacProlog, facilitating AI applications exploiting all the GUI features of the Apple Mac. He has taught at Stockholm and Uppsala Universities in Sweden; Syracuse, UC Santa Cruz and Stanford Universities in the US; and the University of Queensland in Australia. He is currently an Emeritus Professor at Imperial College London, an Honorary Professor at UQ Brisbane, UNSW Sydney and UC London, and a Visiting Researcher at Stanford University.
Most of today's e-commerce and tourism portals rely on recommender system approaches to provide end users with personalized recommendations for products or services (books, restaurants, places to visit, etc.) that they might like or that potentially meet their needs, thereby mitigating today's enormous information overload on the Internet. Undeniably, recommender systems can help users make better decisions when the number of options available is enormous. The explosion of available sources of relevant data in recent years - e.g. preferential, contextual or social data - suggests that the problem of meaningfully fusing such disparate information to make reliable and more "intelligent" recommendations deserves further study. The application of aggregation strategies "beyond simple or weighted averaging" has been widely investigated in the areas of multi-criteria and group decision making, but its potential implications for recommender systems research have been far less explored to date. In this keynote, we first overview the main families of recommender system techniques, after which we highlight the increasingly common connections between multi-criteria and group/consensus decision-making approaches and personalisation, e.g. (1) via recommender models combining various sources of information, and (2) group recommender systems oriented to groups of users with different individual preferences. Discussion is provided on how the properties and practical uses of aggregation processes in decision-making approaches can be translated into a variety of popular recommender techniques and application domains. Applications from the speaker's current research on leisure and tourism planning are shown.
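One well-known family of aggregation operators "beyond simple or weighted averaging" from the group decision making literature is Ordered Weighted Averaging (OWA), where weights attach to score positions after sorting rather than to specific criteria. The scores and weight vectors below are illustrative, not from the talk.

```python
# Ordered Weighted Averaging (OWA): one operator family that spans
# the maximum, the minimum and the arithmetic mean, depending only
# on the position weights chosen. Ratings below are illustrative.

def owa(scores, weights):
    """OWA: weights apply to the scores sorted in descending order."""
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

ratings = [0.9, 0.4, 0.7]   # three criteria ratings of one item

print(owa(ratings, [1.0, 0.0, 0.0]))            # 0.9 -> behaves like max
print(owa(ratings, [0.0, 0.0, 1.0]))            # 0.4 -> behaves like min
print(round(owa(ratings, [1/3, 1/3, 1/3]), 4))  # 0.6667 -> the mean
```

Intermediate weight vectors yield "most criteria should be satisfied"-style semantics, which is the kind of flexibility multi-criteria recommender models can exploit.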
Ivan Palomares-Carrascosa (www.ivanpc.com) is a Lecturer in Data Science and Artificial Intelligence at the University of Bristol, UK. He currently leads his research group on 'Decision Support and Recommender Systems' at Bristol. His main research interests include data-driven and intelligent approaches for recommender systems, personalization for leisure and tourism in smart cities, large group decision making and consensus, data fusion, opinion dynamics and human-machine decision support, with participation in multiple international R&D projects. His results have been published in top journals and conference proceedings, including IEEE Transactions on Fuzzy Systems, Applied Soft Computing, Information Fusion, Knowledge-Based Systems, and Data and Knowledge Engineering. He has recently authored a Springer book on 'Large Group Decision Making', and delivered talks, research seminars and summer courses at top institutions worldwide.
Man-made multi-robot systems have been advancing apace with the help of high-performance hardware and computational technologies. Despite the high-performance computing, communication, sensing, and power devices used in these systems, their effectiveness in uncertain environments still appears to fall behind natural systems such as a swarm of ants, a flock of birds, or a pack of wolves. One of the challenges in multi-robot coordination is the lack of effective distributed algorithms and designs that enable the robots to work cooperatively and safely in uncertain environments. This talk will present some recent research results on distributed algorithms and coordination control methods for cooperative multi-robot systems. The research on this topic has a wide range of potential engineering applications, including surveillance and search, intelligent transportation, environment monitoring, unmanned exploration of dangerous areas, and the deployment and scheduling of sensor networks.
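A canonical building block of the distributed coordination algorithms mentioned above is average consensus: each robot repeatedly nudges its value toward its neighbours' values using only local communication. The ring topology, step size and readings below are illustrative assumptions, not results from the talk.

```python
# Minimal distributed average consensus: each robot updates its own
# value using only the values of its graph neighbours, and the whole
# team converges to the average of the initial values.

def consensus_step(values, neighbours, eps=0.2):
    return [x + eps * sum(values[j] - x for j in neighbours[i])
            for i, x in enumerate(values)]

# 4 robots in a ring; each knows only its two neighbours
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [1.0, 5.0, 9.0, 13.0]       # e.g. local sensor readings

for _ in range(100):
    values = consensus_step(values, neighbours)

print([round(v, 3) for v in values])  # all ~7.0, the initial average
```

No robot ever sees the global state, yet the team agrees on a global quantity; the same local-update pattern underlies distributed formation, rendezvous and estimation schemes.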
Prof. Guoqiang Hu joined the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore in 2011, and is currently a tenured Associate Professor and the Director of the Centre for System Intelligence and Efficiency. He was an Assistant Professor at Kansas State University, Manhattan KS, USA, from 2008 to 2011. He received the B.Eng. degree in Automation from the University of Science and Technology of China in 2002, the M.Phil. degree in Automation and Computer-Aided Engineering from the Chinese University of Hong Kong in 2004, and the Ph.D. degree in Mechanical Engineering from the University of Florida in 2007. His research focuses on analysis, control, design and optimization of distributed intelligent systems, with applications to cooperative robotics and smart city systems. He was a recipient of the Best Paper in Automation Award in the 14th IEEE International Conference on Information and Automation, and a recipient of the Best Paper Award (Guan Zhao-Zhi Award) in the 36th Chinese Control Conference. He serves as Associate Editor for IEEE Transactions on Control Systems Technology, Technical Editor for IEEE/ASME Transactions on Mechatronics, Associate Editor for IEEE Transactions on Automation Science and Engineering, and Subject Editor for International Journal of Robust and Nonlinear Control.
Non-healthcare industries have used a wide spectrum of energy-based systems for virtually every purpose, from microchip manufacturing to artistic creation, whereas only a small portion of these commercially available systems have been exploited by surgeons. Although many of the technologies are large and sophisticated image-guided systems that provide precise targeting at the molecular and atomic level, numerous other technologies are small, hand-held portable systems. Thus, many time-honored surgical procedures will be performed as outpatient or office procedures with small, hand-held directed-energy devices. Within the full spectrum of energy, one of the best opportunities is in photonics, with numerous existing and emerging technologies being accepted by the clinical realm. Even as laparoscopic surgery matures, and the fourth revolution in surgery in 25 years (robotic surgery) gains in popularity, a much more disruptive change is beginning with the next revolution: directed energy for diagnosis and therapy (DEDAT). This advance takes minimally invasive surgery (MIS) to the final step - non-invasive surgery. Building upon the success of MIS, and combining experience in lasers, photo-biomodulation, image-guided surgery and robotic surgery, there are new energy-based technologies which provide the control and precision of photonic energy to begin operating (non-invasively) at the cellular and molecular level. The evidence that has been building across the multidisciplinary communities of photonics, computer-assisted surgery, genetic engineering and molecular biology (Radiology, Surgery, Plasma Medicine, Molecular Biology, the Human Genome) will be presented, including additional technologies beyond photonics such as high-intensity focused ultrasound (HIFU) and terahertz imaging and therapeutics - to name a few.
Though still in its infancy, DEDAT presages the emergence of the non-invasive approach to medicine and surgery with these pioneering techniques, which are but the tip of the iceberg heralding the transition to non-invasive surgery. Such systems are based upon the premise that directed energy, robotics and biomolecular technologies can bring precision, speed and reliability - especially as surgery 'descends' into operating at the cellular and molecular level. Nobel Laureate Richard Feynman was right - there is "plenty of room at the bottom"!
Dr. Satava's prior academic positions include Professor of Surgery at Yale University and a military appointment as Professor of Surgery (USUHS) in the Army Medical Corps, assigned to General Surgery at Walter Reed Army Medical Center. Government positions included Program Manager of Advanced Biomedical Technology at the Defense Advanced Research Projects Agency (DARPA) for 12 years, Senior Science Advisor at the US Army Medical Research and Materiel Command in Ft. Detrick, Maryland, and Director of the NASA Commercial Space Center for Medical Informatics and Technology Applications at Yale University. Upon completion of his military career and government service, he continued clinical medicine at Yale University and the University of Washington. His undergraduate training was at Johns Hopkins University, Medical School at Hahnemann University of Philadelphia, Internship at the Cleveland Clinic, Surgical Residency at the Mayo Clinic, and a Fellowship with a Master of Surgical Research at the Mayo Clinic. He has served on the White House Office of Science and Technology Policy (OSTP) Committee on Health, Food and Safety, was awarded the prestigious Department of Defense Legion of Merit and Department of Defense Exceptional Service medals, and was named the Smithsonian Laureate in Healthcare. He has been a member of numerous committees of the American College of Surgeons (ACS), currently serving on the ACS-Accredited Education Institutes (ACS-AEI). He is a Past President of the Society of American Gastrointestinal Endoscopic Surgeons (SAGES), the Society of Laparoendoscopic Surgeons (SLS), and the Society of Medical Innovation and Therapy (SMIT). He was a member of the National Board of Medical Examiners (NBME), is currently on the boards of many surgical societies and on the editorial boards of numerous surgical and scientific journals, and is active in a number of surgical and engineering societies.
Dr. Satava was the surgeon on the project that developed the first surgical robot, which later became the DaVinci Surgical Robot. In addition, as a government official, he funded all of the surgical robot development for the first 10 years, until commercialization became possible. For 5 years he was a member of the Advisory Board of the National Space Biomedical Research Institute (NSBRI), advising NASA on surgical robotics, advanced biometric sensing and other life science research for astronauts. Dr. Satava has remained continuously active in surgical education and surgical research, with more than 200 publications and book chapters in diverse areas of advanced surgical technology, including Surgery in the Space Environment, Video and 3-D Imaging, Plasma Medicine, Directed Energy Surgery, Telepresence Surgery, Robotic Surgery, Virtual Reality Surgical Simulation, Objective Assessment of Surgical Competence and Training, and the Moral and Ethical Impact of Advanced Technologies. During his 23 years of military surgery he served as an active flight surgeon and an Army astronaut candidate, completed combat tours of duty as a MASH surgeon during the Grenada invasion, and was a hospital commander during Desert Storm, all the while continuing clinical surgical practice. His current research is focused on advanced technologies to formulate the architecture for the next generation of clinical medicine and surgery, education and training.
Currently, multiple unmanned aerial vehicles (UAVs) with embedded sensor and communication devices are attracting growing interest, motivated by the growing number of readily available civil and commercial UAVs and their applications. One of the core problems in an unmanned system is navigation, formulated as driving the vehicle safely from its starting place to a goal without colliding with obstacles. Navigation is often decomposed into two problems: global path planning and local collision avoidance. Global path planning generates a set of waypoints, from an initial position to a final goal point, passing among obstacles in a working space, while local collision avoidance takes a given waypoint assignment as a local goal while avoiding obstacles. In this talk, we first review related techniques in collision avoidance, grouping the types of existing work and discussing the main features of the technologies. Subsequently, we focus on a distributed collision avoidance algorithm proposed for a multi-UAV system. The basic idea is to use the cooperative control concept to generate a heartbeat message, where multi-UAV communication is used to exchange UAV information and fusion technology is used to merge it. With the heartbeat messages fused, each UAV selects a velocity command to avoid only those UAVs or obstacles within a certain range around itself. The velocity obstacle algorithm is adopted for collision avoidance control. The control is distributed, and each UAV independently makes its own decision. Finally, we will show flight tests of the proposed method implemented on several real UAVs.
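As a rough illustration of the velocity obstacle idea mentioned above, the sketch below tests whether a UAV's current relative velocity lies inside the collision cone of a nearby obstacle. This is a minimal 2-D sketch, not the algorithm from the talk; all names and parameters are illustrative.

```python
import math

def in_velocity_obstacle(p_own, v_own, p_obs, v_obs, r_combined):
    """Return True if the current relative velocity leads toward a collision,
    i.e. it lies inside the collision cone (the velocity obstacle)."""
    # Relative position (toward the obstacle) and relative velocity
    rx, ry = p_obs[0] - p_own[0], p_obs[1] - p_own[1]
    vx, vy = v_own[0] - v_obs[0], v_own[1] - v_obs[1]
    dist = math.hypot(rx, ry)
    if dist <= r_combined:                 # already overlapping
        return True
    # Half-angle of the collision cone as seen from the own UAV
    half_angle = math.asin(r_combined / dist)
    # Angle between relative velocity and the line of sight to the obstacle
    angle = abs(math.atan2(vy, vx) - math.atan2(ry, rx))
    angle = min(angle, 2 * math.pi - angle)
    # Moving away from the obstacle can never collide
    if (vx * rx + vy * ry) <= 0:
        return False
    return angle < half_angle
```

A full planner would then pick, among the candidate velocity commands, one outside all such cones within sensing range.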
Sunan Huang received his Ph.D. degree from Shanghai Jiao Tong University in 1994. He is currently a senior research scientist in Temasek Laboratories, National University of Singapore. He has co-authored several patents, more than 120 journal papers, and four books: "Precision Motion Control" (2nd Edition, translated into Chinese), "Modeling and Control of Precise Actuators", "Applied Predictive Control" and "Neural Network Control: Theory and Applications". He is a co-recipient of several Best Paper Awards and a recipient of the Supervisor Recognition Award for the Champion Team (TSS 1st Singapore Engineering Design Challenge) in 2005. He also received the 2017 Outstanding Reviewer Award for his contributions to the journal Mechatronics (Elsevier). He is also a Reviewer Editor of Frontiers in Robotics and AI, a member of the Editorial Board of the Journal of Robotics, and an Editorial Board Member of Advances in Mechanical Engineering.
Fuzzy Theoretic Deep Learning
University of Rostock
Modern medicine relies profoundly on the application of high technology, where information technology emerges as a critical complementary area that significantly contributes to the development of medical science and medical practice. Digitalization is an unavoidable direction in the evolution of modern medicine. However, we are at a turning point in the development of IT: a shift from the virtual to the cyber-physical. It is therefore necessary to adopt new approaches and a new understanding of science and culture: from a "data driven" to an "AI things driven" concept. This means that we are entering the era of a new technological revolution, characterized by "smart things" - systems that convert information into action. Such systems are commonly referred to as robots. The use of robots in medicine offers huge potential for improving various technically and/or physically demanding medical procedures, and the specific skills that physicians must possess in addition to theoretical and experiential knowledge within their profession. However, the use of robots in medicine, despite numerous tangible benefits, faces a number of scientific and technical challenges. The presentation will highlight the main directions of the development of medical robotics, the possible advantages of its application, as well as its problems. Special attention will be given to the Croatian projects RONNA and NERO - robotic systems for stereotactic neurosurgical operations - as well as their clinical application.
Dr. Bojan Jerbić is a full professor at the Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Department for Robotics and Automation of Manufacturing Systems. He teaches courses in the field of automatic system design, intelligent robotics and engineering computing. His scientific research is in the field of artificial intelligence in robotics, especially behavioristic problems. He is engaged in the research and development of intelligent control models for multi-agent robotic systems, medical robotics, human-robot interaction and robotic models of consciousness. Prof. Jerbić conducted his doctoral studies in the United States at Florida State University. He has published more than 100 scientific papers and three books, and has participated in ten research projects and more than 30 industrial projects. He has held numerous invited lectures at home and abroad.
Autonomous navigation has earned its place in the mainstream, with many autonomous cars promising to deliver a payload to a desired destination. Most contemporary systems, however, rely heavily on well-marked roads and struggle to navigate even in suburban environments. Rural areas, back roads and disaster rescue environments often do not provide sufficient visual cues for the guidance of conventional systems, and cannot guarantee the traversability of the existing road networks. The commonly accepted solution in such conditions is the use of GNSS. Yet many mountain and forest roads have very low GNSS coverage or suffer loss of signal, and the precision of the sensor is often too coarse to keep within a traversable area. We present a monocular system that is capable of following ill-defined roads and traversing rough-terrain environments based on topological instructions. The main goals of the system are robustness of navigation and a human-friendly route specification environment, coupled with an adaptive execution scheduler. Interaction with the system has to be as effortless as possible; the operator can therefore specify the desired route using a limited vocabulary. The route description combines topological and metric cues, replicating natural route specification paradigms. A catadioptric omnidirectional camera is used to provide the position of the road or the orientation of the vehicle during operation. Concurrent multi-modal detection handles transition states and provides reliable detection in unknown environments. This research was partly funded by the Defence and Security Accelerator.
Marek Ososinski completed his PhD at Aberystwyth University, Wales, where he continued his postdoctoral work. He is a research developer at Tadano Ltd., a global manufacturer of lifting equipment. He has worked on the Mars Terrain Simulator for ESA's ExoMars mission and on GPS-denied autonomous navigation for the defence sector.
Daily living phenomena pose difficulties of diversity, dispersiveness and dimensionality (the 3D's), in contrast to other physical phenomena, when we develop science and technology for them. Diversity indicates that a person's physical and mental capabilities change and diversify with age. Dispersiveness stems from the fact that the data necessary for living science are distributed across multiple organizations. Dimensionality is a problem when implementing technology at actual field sites; we cannot know which variables are important or ignorable before implementation. However, recent developments in IoT and artificial intelligence allow us to solve this 3D problem and develop useful living solutions. This talk describes smart living labs as a new approach to the development of living support technology for children and the elderly, whose cognitive and physical capabilities change, by leveraging the cognitive, connective, and complexifying (3C's) functions of the smart living labs. The authors developed 1) a battery-less shoe-shaped wearable sensor, 2) RGB-D cameras with face and posture recognition, and 3) a handrail-shaped sensor as "cognitive" technology. By embedding these cognitive technologies into "connective" real-world living labs such as a children's hospital, a rehabilitation hospital, nursing homes, and ordinary homes, we have accumulated a large database of real living data. The authors released a behavior video library to the public in March 2018 as one such living database. The behavior video library is a video-based database of how elderly people with dementia use consumer products. The video library allows manufacturers and service providers to understand the relationship between the degree of dementia and consumer product use. The smart living labs are also utilized for "complexifying" actual field sites: conducting implementation science of new technology through mutual interaction and coevolution among users, practitioners, and developers.
Trade-offs between quality of service and exposure of personal information, as well as the acceptability of new technology, are also investigated.
Yoshifumi Nishida is the Team Leader of the Living Intelligence Research Team at the Artificial Intelligence Research Center (AIRC) of the National Institute of Advanced Industrial Science and Technology (AIST) in Japan. He received a PhD from the Graduate School of Mechanical Engineering, the University of Tokyo, in 1998. In 1998, he joined the Intelligent System Division of the Electrotechnical Laboratory at AIST of the Ministry of International Trade and Industry (MITI), Japan. In 2001, he joined the Digital Human Laboratory, and in 2003 the Digital Human Research Center (DHRC) of AIST. He has also been a Prime Senior Research Scientist at AIST since 2013. His research interests include human behavior sensing, human behavior modeling, injury prevention engineering, and social participation support. He is a member of the Robotics Society of Japan and the Japanese Society for Artificial Intelligence. He received best paper awards from the Robotics Society of Japan, the Japan Ergonomics Society, and the Information Processing Society of Japan.
Objectives: The da Vinci system has been used by a variety of disciplines for laparoscopic procedures, but the use of robots in vascular surgery is still relatively unknown. The feasibility of laparoscopic aortic surgery with robotic assistance has been sufficiently demonstrated. Our clinical experience with robot-assisted vascular surgery performed using the da Vinci system is herein described. Methods: Between November 2005 and September 2018, we performed 437 robot-assisted vascular procedures. Patients were prospectively evaluated: 291 for occlusive diseases, 111 for abdominal aortic aneurysm (Fig.), 5 for a common iliac artery aneurysm, 9 for a splenic artery aneurysm, 1 for an internal mammary artery aneurysm, 8 for median arcuate ligament release, 8 for type II endoleak treatment post EVAR, and 2 for renal artery reconstruction; two cases were inoperable. Five hybrid procedures were also performed in the study.
Many studies are devoted to the functioning of autonomous mobile objects (MOs) in vague and nondeterministic environments. The choice of methods that are consistent with the state of the environment is often heuristic. In complex environments, combining different methods can lead to a significant increase in the quality of performance, up to 50%, while in simple environments it leads to only a slight change, of about 10%, in this indicator. However, the concept of the complexity of the environment remains largely intuitive, which does not allow us to formalize the choice of one or another approach to motion planning. In this paper, a measure of complexity characterizing the possibility of successful passage through an environment with obstacles is introduced and justified. This measure is calculated on the basis of data from the MO's vision system. The informal formulation of the problem is as follows. The MO moves on a flat surface. The task of the object is to reach a target point. Environment parameters such as the number, position and size of obstacles are not known in advance, but can be estimated during real-time MO operation. Depending on these parameters, the complexity of achieving the goal will differ. The MO must evaluate the complexity of the environment and decide on the use of a particular behavior strategy corresponding to this complexity. Requirements of non-negativity and normalization of the complexity measure are postulated: equality to zero corresponds to a maximally simple environment, and equality to one corresponds to the most complex environment, in which movement towards the goal is impossible. To obtain the measure of complexity, a Delaunay triangulation is used with nodes located at the centers of the obstacles. The resulting partition of the scene is described with the help of a topological graph.
As a result, the points determining the locations of the object and the target are identified with certain vertices of the graph. A possible transition on the graph from one vertex to another corresponds to moving the object from one region of the partition to a neighboring one; in this case, the object crosses an imaginary segment connecting the centers of neighboring obstacles. The concept of the width of an edge, which induces a function on the edges of the graph, is introduced. This function then extends to the set of all paths on the graph: the width of a path is equal to the smallest width among all the edges that form the path. By the passability of the environment from the mobile object to the target we mean the maximum width among all paths connecting the corresponding vertices. To calculate all possible passabilities, a procedure based on the max-min composition of the graph adjacency matrix is described. Passability as a characteristic of the complexity of the scene is inconvenient, since it is expressed in absolute units of distance, i.e. it depends on the scale of the measurements, and it does not satisfy the normalization requirement. Therefore, an expression for the measure of complexity is proposed that is free of these shortcomings. It is shown that, as a measure of complexity, one can take the solution of a certain differential equation that satisfies the requirements and is a function of passability. This expression is called the local measure of complexity, i.e. complexity with respect to a fixed position of the target. The work also introduces and investigates an integral measure of complexity that takes into account all potentially possible positions of the goal. The need for such a measure arises, for example, if the coordinates of the goal are unknown beforehand but are communicated to the object in the course of its motion.
The results of the simulation are presented. Possible generalizations of the obtained results are also considered.
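The max-min composition used to compute all pairwise passabilities can be sketched as follows. This is a minimal illustration assuming edge widths are stored in a symmetric matrix; the names are chosen for this sketch, not taken from the paper.

```python
def passability(width):
    """Max-min transitive closure of an edge-width matrix.

    width[i][j] is the width of the edge between regions i and j (0 if
    absent).  The result[i][j] is the passability: the maximum, over all
    paths from i to j, of the minimum edge width along the path.
    """
    n = len(width)
    p = [row[:] for row in width]
    for _ in range(n):  # n max-min compositions cover all simple paths
        q = [[max(p[i][j], *(min(p[i][k], width[k][j]) for k in range(n)))
              for j in range(n)] for i in range(n)]
        if q == p:      # fixed point reached early
            break
        p = q
    return p
```

For a chain of regions 0-1-2 with edge widths 3 and 5, the passability from 0 to 2 is min(3, 5) = 3, matching the "width of the narrowest edge on the widest path" definition above.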
Alexander N. Karkishchenko completed his PhD at the Taganrog Radio Engineering Institute, Russia, and postdoctoral studies at the Taganrog State University of Radio Engineering, Russia. He is a Professor at Southern Federal University and a Leading Researcher at the Research and Development Institute of Robotics and Control Systems. He has published more than 35 papers in reputed journals and has served as an editorial board member of journals of repute.
Over the last decade, neural networks (NNs) have become powerful tools for studying and creating numerical models of low-dimensional (non-linear) manifolds with an unknown structure. We will introduce an algorithm that improves NN classification/registration algorithms when data samples are corrupted. The algorithm combines ideas of the Orthogonal Greedy Algorithm (or, more broadly, ideas of error-correcting codes), which are essentially related to linear manifolds, with the standard gradient back-propagation engine incorporated in NNs as a training tool. We therefore call it the "Greedient" algorithm. Essentially, this algorithm is an extension of sparse representation recovery with an "orthogonal greedy" approach. We will then discuss the greedy approach for training NNs when training data are unreliable, i.e., they have both missing and corrupted entries in the labeled training set. In particular, we show that this problem is an extension of the low-rank matrix completion problem to non-linear settings.
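Since the talk builds on the Orthogonal Greedy Algorithm for sparse representation recovery, a minimal sketch of its classical form (Orthogonal Matching Pursuit) may help fix ideas. This is the textbook building block, not the Greedient algorithm itself; variable names are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x with y ~ A @ x."""
    n = A.shape[1]
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(sparsity):
        # Greedy step: pick the column most correlated with the residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal step: least-squares refit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x
```

The talk's approach, as described, swaps the linear least-squares inner step for a back-propagation-trained nonlinear model while keeping the greedy outer loop.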
Artificial intelligence drives technological transformation in a manner that empowers human lives. The influence of artificial intelligence in enterprise and industry is pervasive, as seen in Industry 4.0, which leverages advances in robotics, artificial intelligence and data-driven technologies. The increased ease of processing data makes the deployment of machine learning based solutions feasible. For example, cloud platforms enable the development of multi-platform machine learning applications. The joint consideration of robotics and artificial intelligence enables the development of novel solutions for existing problems. An important application in this regard is submarine exploration. Robotics research is focused on designing technologies that enable the monitoring of remote environments and hard-to-reach locations, an example of which is the submarine environment. Industrial initiatives such as Microsoft's Project Natick aim to site data centres in the ocean's depths. Launching such an application requires ocean surveys to identify suitable locations for siting underwater data centres. Currently, such data can be obtained via expensive ocean bathymetry surveys. In this context, the combination of satellite-based bathymetry and low-cost ocean surveys proves useful. Low-cost ocean surveys can be realized by developing low-cost intelligent marine drones that can rapidly survey a specific portion of the ocean, with the aim of determining the suitability of that part of the ocean for information and communication technology applications.
Ayodele Periola completed his PhD in Electrical Engineering at the University of Cape Town, South Africa. His research focuses on applying artificial intelligence to improve access to wireless technologies and applications, which is necessary to increase technology adoption in contexts where capital constraints pose a significant barrier. In this regard, he has published research articles aimed at reducing the cost of conducting radio astronomy surveys and at designing earth observation systems. He has proposed solutions that improve the quality of service subscribers derive from wireless communications. His research findings have been presented in IEEE conferences and journal articles.
We focus on developing reinforcement learning techniques in online mode with the linear value-function approximation approach. An online version of DCA (Difference of Convex functions Algorithm), an innovative and powerful approach of nonconvex optimization, is investigated for minimizing the squared l2-norm of the empirical Optimal Bellman Residual. We design an online DCA-based algorithm which enjoys the online stability property. By exploiting the knowledge of the sample at each online step, we propose an alternating version of this algorithm in which the value function is updated alternately. To improve the performance of the proposed algorithms, we combine them with an eligibility-trace mechanism from reinforcement learning. Numerical experiments on two benchmarks - the mountain car and pole balancing problems - show the effectiveness of our methods in comparison with some standard reinforcement learning algorithms.
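For readers unfamiliar with the building blocks combined here, a standard online TD(lambda) update with linear value-function approximation and an eligibility trace can be sketched as follows. The talk's DCA-based method replaces the plain gradient step with a convex subproblem, so this sketch only illustrates the ingredients; names are illustrative.

```python
import numpy as np

def td_lambda_step(theta, e, phi_s, phi_next, reward, gamma, lam, alpha):
    """One online TD(lambda) update.

    theta: weight vector of the linear value function V(s) = phi(s) @ theta
    e: eligibility trace, phi_s / phi_next: feature vectors of s and s'.
    """
    delta = reward + gamma * phi_next @ theta - phi_s @ theta  # TD error
    e = gamma * lam * e + phi_s            # decay trace, accumulate features
    theta = theta + alpha * delta * e      # trace-weighted gradient step
    return theta, e
```

Each online sample thus updates all recently visited features at once, which is the performance benefit the eligibility trace brings to the proposed algorithms as well.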
LE THI Hoai An obtained her PhD with Distinction in Optimization in 1994 and her Habilitation in 1997, both from the University of Rouen, France. She is currently a Full Professor of Exceptional Class at the University of Lorraine. She is the author/co-author of more than 230 journal articles, international conference papers and book chapters, and the co-editor of 20 books and/or special issues of international journals. She has served as president of the scientific committee, president of the organizing committee, and member of the scientific committee of various international conferences, and has headed several research projects. Her research interests include machine learning, optimization and operations research and their applications in information systems and industrial systems. She is the co-founder (with Pham Dinh Tao) of DC programming and DCA, an innovative approach in non-convex programming.
Various methods have been proposed for pattern classification with quantum neural networks. Most of these methods employ Grover's iteration on Bell's maximally entangled states (MES) in a two-qubit system. It has further been demonstrated that, for pattern classification in a two-qubit system, the maximally entangled states of the Singh-Rajput eigenbasis provide the most suitable choice of search states, and that in no case is any of Bell's states suitable for such pattern classification. In the present work, we employ a quantum perceptron architecture which incorporates entanglement of both weights and states to produce the required pattern classification. A quantum perceptron learning rule is presented to train the network on a given training set, and convergence and normalization of the weights are observed. The simulation results show that the proposed quantum perceptron neural network is capable of classifying all kinds of patterns, whether linearly separable or not.
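For reference, the Grover iteration on a two-qubit register that most of the cited methods build on can be sketched numerically as follows; for N = 4 basis states, a single iteration amplifies the marked state to probability one. This is a textbook sketch, not the proposed quantum perceptron.

```python
import numpy as np

def grover_2qubit(marked):
    """One Grover iteration on a 2-qubit register (4 basis states).

    Returns measurement probabilities after oracle + diffusion applied
    to the uniform superposition; for N = 4 one iteration is exact."""
    n = 4
    psi = np.full(n, 0.5)                              # uniform superposition
    oracle = np.eye(n)
    oracle[marked, marked] = -1                        # phase-flip marked state
    diffusion = 2 * np.full((n, n), 1 / n) - np.eye(n) # inversion about mean
    psi = diffusion @ (oracle @ psi)
    return np.abs(psi) ** 2
```

Running `grover_2qubit(2)` concentrates all probability on basis state |10>, which is why a single Grover step suffices for two-qubit pattern search.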
The World Health Organization's Global Status Report on Road Safety (2015) estimated that more than 1.25 million people die each year due to road traffic crashes, and road traffic injury is predicted to become the seventh leading cause of death by 2030. It also notes that approximately 90% of accidents occur due to human error. Automated driving will play a vital role in significantly reducing road traffic crashes and will ultimately save human lives. Designing an automated driving car is one of the most difficult and challenging tasks for a researcher in the automotive field; it presents a high-stakes test of the computational abilities of machine learning algorithms. In this article, we describe an architecture under development in the framework of autonomous driving. We show the utility of a Genetic Algorithm and a Neuro-Symbolic Reasoning approach for action selection in autonomous vehicles. Our system utilizes a bio-inspired mechanism during simulations of tasks through hierarchical perception-action learning. Perceptions are highly constrained by external stimuli and do not produce a perfect replica of the world. Hence, a dream-like simulation mechanism is used that works in the absence of external stimuli for goal-directed scenarios to explore alternative paths. In recent years, industrial and research activity aimed at implementing automated driving has increased considerably. However, the main challenge that has emerged is how to demonstrate that engineered vehicles are safer than humans. This paper provides insight into bio-inspired mechanisms in conjunction with machine learning algorithms for autonomous vehicles.
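As a toy illustration of genetic-algorithm-based action selection, the sketch below evolves fixed-length action sequences against a user-supplied fitness function. All names, parameters and the toy task are illustrative; the actual system's encoding and fitness are far richer.

```python
import random

def evolve_actions(fitness, actions, horizon=5, pop=30, gens=40, seed=1):
    """Minimal GA over fixed-length action sequences (maximizes fitness)."""
    rng = random.Random(seed)
    population = [tuple(rng.choice(actions) for _ in range(horizon))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, horizon)        # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(horizon)] = rng.choice(actions)
            children.append(tuple(child))
        population = parents + children            # elitism keeps the best
    return max(population, key=fitness)

# Toy task: a fitness that simply prefers 'forward' at every step
best = evolve_actions(lambda seq: seq.count('forward'),
                      ['forward', 'left', 'right'])
```

In a driving context the fitness would instead score a candidate sequence inside the dream-like simulator described above.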
Hari is a Lecturer in Computer Science at Edge Hill University, UK, specialized in Computer Science & Engineering. His research areas include artificial intelligence, soft computing techniques, natural language processing, language acquisition and machine learning algorithms. He is the author of various books in computer science and engineering (algorithms, programming and evolutionary algorithms). He has published over 50 scientific papers in reputed journals and conferences, served as a session chair and leading guest editor, and delivered keynotes. He has received the prestigious award "The Global Award for the Best Computer Science Faculty of the Year 2015" (http://asdf.international/hari-mohan-pandey-asdf-global-awards-2015/), an award for completing the INDO-US project "GENTLE", an award (Certificate of Exceptionalism) from the Prime Minister of India, and an award for developing innovative teaching and learning models for higher education. Previously, he was a research fellow in machine learning at Middlesex University, London, where he worked on the European Commission project DREAM4CAR. His role was to research and develop advanced machine learning techniques relevant to the project goals and to evaluate them on both project and reference data sets; to lead and manage relevant work packages, ensuring appropriate interfacing with partners; to carry out individual and collaborative research relevant to the project; to contribute to the development of software according to project specifications; to produce research reports and deliverables; and to coordinate with research partners and stakeholders, with immediate responsibility for his work package.
Artificial Intelligence (A.I.) is one of the most controversial technological advances of the 21st century, due to its impact on employment and the workforce, on decision-making, and even on human behavior and emotions. As far as employment is concerned, it has been predicted that a large number of jobs (e.g. 47% in the USA) will be automated. So one of the issues that arises is how people whose jobs will be taken over by robots will react. What can we be taught by the Luddite uprising in England dating back to the 19th century? Was it really a fight against automation, or one against the logic of industrial capitalism? Were the Luddites just demanding an equal share of the economic profits? Could today's society be faced with a similar situation? Is it prepared and willing to adapt and take advantage of equitable benefit-sharing? Can it overcome the challenges and drawbacks of artificial intelligence? There is no doubt that, for the first time in the history of humankind, one of the creations of human intelligence could prove to be even smarter than humans. Is society psychologically ready for this new reality? Will these new robots be regarded as assistants, as colleagues with equal rights, or as antagonists or even enemies displacing human labor? Who will design and program them or have access to their applications? Will the use of A.I. be well-intended? What will be the cultural, religious, professional, political and financial background of its creators? These are questions that need to be answered, or at least discussed, not only by the scientists who contribute to their creation, but also by scientists of other disciplines, such as sociologists, psychologists and anthropologists, not to mention law experts and intellectuals. Overall, they should be objective and public-spirited personalities with high moral standards.
Mrs. Vassiliki S. Stavropoulou is an Assistant Professor at the Technological Educational Institute of Central Greece, where she has been teaching Technical English Terminology for the last twenty years. Also, for the last five years, she has been teaching Business Administration for students of Engineering, and more specifically the subjects of "Entrepreneurship" and "Man and Science" for the Automation Department. She has studied English and Greek literature at the National and Kapodistrian University of Athens and she holds a Master of Arts in TESL from St Michael's College, Winooski, Vermont USA. She has published four books, the most recent of which are: English for Avionics, Electrical Systems & Navigation, Student's Book & Workbook, Modern Publishing Ltd, Athens, 2013; English for Mechanical Engineering, Student's Book, Modern Publishing Ltd, Athens, 2009; Study Guide on Mechanical Engineering Terminology, Modern Publishing Ltd, Athens, 2009.
This talk will describe possible approaches to achieving human-level artificial intelligence. Artificial intelligence began in roughly 1956 at a conference at Dartmouth College. The participants, and many researchers after them, were clearly overly optimistic, and as with many new technologies, the technology was oversold for many decades. Computers at the time could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference, we have computers on the order of the performance of the human brain (10^16 operations per second), even if they are a million times less efficient (in terms of power and space) than the human brain. The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how human neurons are wired together, or how carefully we need to match brain architecture. Human-brain-scale simulations, however, are now feasible on massively parallel supercomputers. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, these simulations can be performed. Artificial consciousness and emotions will also be possible.
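As a sketch of how such simulations step neuron state through time, the following uses the much simpler leaky integrate-and-fire model rather than Hodgkin-Huxley; all parameter values are illustrative, and a brain-scale code would use event-driven updates across millions of such units.

```python
def lif_spike_times(current, t_max=0.1, dt=1e-4, tau=0.02,
                    v_rest=0.0, v_th=1.0, r=1.0):
    """Leaky integrate-and-fire neuron under a constant input current.

    Returns the list of spike times (s) over the simulated window."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        # Euler step of: tau * dv/dt = -(v - v_rest) + r * current
        v += dt / tau * (-(v - v_rest) + r * current)
        if v >= v_th:          # threshold crossing emits a spike
            spikes.append(t)
            v = v_rest         # reset the membrane potential
        t += dt
    return spikes
```

A supra-threshold current (steady-state potential above v_th) produces a regular spike train, while a sub-threshold current produces none, which is the basic behavior large spiking simulations reproduce at scale.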
Dr. Lyle Long is a Professor of Aerospace Engineering, Computational Science, Neuroscience, and Mathematics at The Pennsylvania State University. He has a D.Sc. from George Washington University, an M.S. from Stanford University, and a B.M.E. from the University of Minnesota. In 2007-2008 he was a Moore Distinguished Scholar at Caltech. He has been a visiting scientist at the Army Research Lab, Thinking Machines Corp., and NASA Langley Research Center, and was a Senior Research Scientist at the Lockheed California Company. He received the Penn State Outstanding Research Award, the 1993 IEEE Computer Society Gordon Bell Prize for achieving the highest performance on a parallel computer, a Lockheed award for excellence in research and development, and the 2017 AIAA Software Engineering Award and Medal. He is a Fellow of the American Physical Society (APS) and the American Institute of Aeronautics and Astronautics (AIAA). He has written more than 260 journal and conference papers.
For robots to be useful in the real world, they need to understand what is going on around them and interact with humans. We present the CART system, a government (DARPA) funded project in collaboration with the Massachusetts Institute of Technology (MIT). The goal of the CART project is to produce robots that can assist humans in performing repair operations. CART observes what the human repair person is doing and understands it in the context of a repair mission. A biologically inspired model of attention allows the robot to pay attention to what is important while avoiding distractions from other activity nearby. The robot can discuss the repair activity verbally with the human repair person, using natural language, and can warn the human of mistakes and recovery options. The CART system depends upon a continuous understanding of goals, intentions, causality, and common sense. We will demonstrate the robot working with a human repair person performing a simple repair operation. We will explain the model of attention and discuss the tracking of key objects throughout the mission.
Dr. Paul Robertson completed his D.Phil. at the University of Oxford, UK. He is the chief scientist of DOLL Inc., a premier artificial intelligence and robotics research organization. He has published numerous journal and conference articles, served as program chair for a major conference, and served on numerous program committees.
Medical robotic devices (MRDs) now participate in virtually all aspects of modern medical diagnosis and treatment, and have contributed to more efficient rehabilitation therapies. In particular, diverse active MRDs such as orthoses and prostheses, which can mobilize the patient and evaluate the progress of the therapy, have appeared over the last decades. The application of MRDs in rehabilitation is now recognized as a principal research area in the field of robotics; indeed, rehabilitation robotics is recognized as an interfacial discipline between engineering and medicine. The more advanced robotic orthoses use electrophysiological signal information to enforce the active participation of the patient in his/her rehabilitation. The adequate use of electrophysiological information, however, demands signal classifiers based on artificial intelligence methods, and the time dependence, nonlinearity, and non-stationarity of electrophysiological signals introduce major challenges in classifying the patterns contained within them. These facts have motivated the application of time-variant, continuous pattern classifiers based on artificial neural networks. This talk reviews several applications of differential neural networks (DNNs) that have served as key elements in regulating the activity of several robotic orthoses for the lower and upper human limbs, neck, back, hand, etc. The talk discusses the entire process of designing, instrumenting, controlling, and evaluating the diverse orthoses. Moreover, the application of DNNs as pattern classifiers, autoencoders, identifiers, and soft sensors is detailed for the task of rehabilitating diverse sections of the human body.
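As a toy illustration of the identifier role such networks play, the sketch below simulates a differential neural network tracking a single-state plant by Euler integration. The plant, gains, and learning laws are hypothetical simplifications for exposition, not the designs discussed in the talk.

```python
import math

def dnn_identify(plant, x0=0.0, a=-2.0, k=5.0, dt=1e-3, t_end=5.0):
    """Euler simulation of a differential neural network (DNN) identifier.

    Estimator:  xhat' = a*xhat + w1*tanh(xhat) + w2
    Learning:   w1'   = -k*tanh(xhat)*e,  w2' = -k*e,  with e = xhat - x
    The stable linear term (a < 0) plus online weight adaptation lets the
    estimate track the plant state without knowing the plant's equations.
    Returns the absolute estimation error at t_end.
    """
    x, xhat = x0, 0.0
    w1, w2 = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        e = xhat - x                              # estimation error
        x += dt * plant(x, t)                     # true (unknown) plant
        xhat += dt * (a * xhat + w1 * math.tanh(xhat) + w2)
        w1 += dt * (-k * math.tanh(xhat) * e)     # adapt nonlinear weight
        w2 += dt * (-k * e)                       # adapt bias weight
    return abs(xhat - x)
```

For a hypothetical stable plant such as x' = -x + 1, the terminal estimation error after a few seconds of adaptation is small, even though the identifier never sees the plant's equations.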
Dr. Isaac Chairez completed his PhD at the Centro de Investigación y Estudios Avanzados (CINVESTAV) of the National Polytechnic Institute, Mexico. He is the director of the Medical Robotics and Biosignal Processing Laboratory at the Professional Interdisciplinary Unit of Biotechnology, National Polytechnic Institute. He has published 1 book, 102 refereed papers in reputed international journals, and 350 papers at international congresses, and has been serving as an editorial board member and reviewer for several journals concerned with applications of automatic control and artificial intelligence theories. His current interests include the application of automatic control theory to biomedical sciences, biotechnological processes, and microtechnological systems.
In the pursuit of machine intelligence, "learning", i.e. computer algorithms improving automatically with experience, is central to research on artificial intelligence and machine learning. In today's era of information technology there is an abundance of data being generated by different sources, and the development of computational methodologies for learning data representations remains a central focus. The data inherit uncertainties from their sources, prompting researchers to apply fuzzy theory in machine learning as a means to handle those uncertainties. Despite numerous research studies on artificial intelligence and machine learning, some fundamental issues remain unaddressed: 1. A mathematical theory is not available to study the propagation of non-statistical uncertainty across the layers of a deep model by means of fuzzy membership functions. 2. There is no analytical solution to the learning of a deep model in which fuzzy membership functions quantify the uncertainties on the variables and nonlinear functions associated with the layers of the deep model. 3. The gradient-descent-based learning of deep models requires a long time, since a large number of model parameters are estimated using ad hoc iterative numerical algorithms. 4. The state of the art in nonparametric deep learning does not provide a fuzzy-theoretic framework as elegant and powerful as the Bayesian framework in statistics and machine learning. We have introduced a novel fuzzy-theoretic approach to machine learning that addresses the aforementioned issues. A complete analytical solution to the learning of a deep model is provided.
Because the analytical approach mathematically takes into account the uncertainties on the variables and nonlinear functions associated with the deep model, a fast learning algorithm can be designed, resulting in robust deep models that should perform remarkably better than classical machine learning methods and convolutional neural networks.
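One concrete way to picture non-statistical uncertainty moving through a layer is alpha-cut propagation: at a fixed membership level, each fuzzy variable becomes an interval, and an affine layer followed by a monotone activation maps interval endpoints to interval endpoints. This is an interval-based stand-in for illustration, not the analytical solution described in the abstract.

```python
import math

def propagate_alpha_cut(weights, bias, x_lo, x_hi):
    """Propagate an alpha-cut (interval) through one dense tanh layer.

    For y = tanh(W x + b) with x confined to the box [x_lo, x_hi],
    each pre-activation bound is obtained by picking the lower or upper
    end of every input according to the sign of its weight; tanh is
    monotone, so the bounds survive the activation unchanged.
    """
    y_lo, y_hi = [], []
    for row, b in zip(weights, bias):
        lo = hi = b
        for w, l, h in zip(row, x_lo, x_hi):
            if w >= 0:
                lo += w * l
                hi += w * h
            else:           # a negative weight swaps which end is extreme
                lo += w * h
                hi += w * l
        y_lo.append(math.tanh(lo))
        y_hi.append(math.tanh(hi))
    return y_lo, y_hi
```

For a single output with weights (1, -1) and inputs each confined to [0, 1], the pre-activation ranges over [-1, 1], so the output interval is [tanh(-1), tanh(1)].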
Mohit Kumar is apl. Professor for "Computational Intelligence in Automation" at the Faculty of Computer Science and Electrical Engineering of the University of Rostock, Germany. He is the director of the Big Data and Machine Learning Research Center at the Binhai Industrial Technology Research Institute of Zhejiang University, China. He has more than 15 years of experience in the field of fuzzy machine learning. More information about his research is available at https://www.fuzzycomputing.com.
Cybercriminal activities on the dark web can be considered one of the critical problems for societies around the world. Cybercriminals may use the Internet for criminal activities such as trading and buying drugs, pedophilia, hiring hitmen, forgery, piracy, and terrorism. In this paper, we highlight the illegal activities happening on the dark side of the Internet, in particular the dark web forums. We used the Onion Router to examine the web content that users of dark web forums are posting and discussing. This paper also examines the most common illicit subjects that users chat about in dark web forums. We performed affect analysis on the dark web forums, which is useful for measuring the presence of illicit subjects such as drugs, violence, forgery, and piracy. The results show that there are similarities in the subjects between the two most popular dark web forums.
Hussein AlNabulsi is a PhD student in the computing faculty at Charles Sturt University, Albury, Australia; his research interest is computer security. He holds a master's degree in computer engineering from Yarmouk University, Jordan. AlNabulsi has published 5 conference papers, 1 journal paper, and 2 book chapters.
Autonomous Solutions Inc. (ASI) has been fielding autonomous ground vehicles in markets such as agriculture, automotive, mining, security, and cleaning for 18 years. There are many challenges, and correlated opportunities, in the successful wide-scale deployment of these solutions. Collecting data from this IoT-connected equipment and putting it into the cloud enables data mining, machine learning, and artificial intelligence approaches that maximize the potential for success and increase effectiveness in ways never before possible. Mel Torrie will share ASI's experiences to date and how they are using these technologies to successfully change the world through automation.
Mr. Torrie has a master’s degree in Electrical Engineering from Utah State University and is founder and CEO of Autonomous Solutions Inc. (ASI) in Logan, Utah. ASI develops and sells systems for driverless ground vehicle control in mining, military, agriculture, material handling, automotive proving grounds, industrial cleaning, and security. Prior to founding ASI 16 years ago, Mr. Torrie worked at Utah State University, where he managed two NASA Space Shuttle payloads. Mel is a sought-after speaker in the robotics community and has delivered addresses at conference events such as the Precision Farming Expo, RoboBusiness, The Canadian Institute of Mining, Optimizing Mining Operations, Robotics Alley, and Rise Hong Kong.
In this talk, I will discuss a framework that combines data analytics, machine learning, and optimization to solve real-world complex problems effectively. In particular, I will show how spatial-temporal data can be used to derive predictive models that are in turn used to derive optimal plans. I will illustrate with two real-world problems: (1) in public safety, where crime data can be used to predict crime occurrence, which in turn can be used for police officer staffing; and (2) in maritime traffic safety, where AIS (Automatic Identification System) data can be used to predict vessel movement trajectories, which in turn can be utilized to provide timely advice that enhances safety for vessels navigating the Singapore Strait.
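The optimization half of such a predict-then-optimize pipeline can be caricatured in a few lines. The sketch below is a hypothetical greedy staffing rule: it assumes a predictive model has already produced per-zone risk scores and that each additional officer halves a zone's residual risk. It is not the actual method behind the deployed systems.

```python
def allocate_officers(predicted_risk, n_officers):
    """Greedy staffing from predicted per-zone risk scores.

    Repeatedly posts the next officer to the zone with the highest
    remaining risk, assuming (hypothetically) that each posting halves
    that zone's residual risk, i.e. diminishing returns per officer.
    """
    risk = dict(predicted_risk)           # don't mutate the caller's dict
    staffing = {zone: 0 for zone in risk}
    for _ in range(n_officers):
        zone = max(risk, key=risk.get)    # currently most at-risk zone
        staffing[zone] += 1
        risk[zone] /= 2.0
    return staffing
```

With hypothetical risk scores {'A': 8.0, 'B': 3.0} and three officers, zone A receives two postings (8 → 4 → 2) before zone B's residual risk of 3 becomes the largest, giving {'A': 2, 'B': 1}.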
Hoong Chuin LAU is Professor of Information Systems and Director of the Fujitsu-SMU Urban Computing and Engineering Corp Lab at the Singapore Management University. His research in the interface of Artificial Intelligence and Operations Research has contributed to advances of algorithms in a variety of complex planning and optimization problems in logistics, transportation, and safety and security. The common thread running through his research is a focus on going beyond publications to build usable novel tools and prototypes, a number of which have been testbedded and deployed in industry. He has served on a number of editorial boards, including IEEE Transactions on Automation Science and Engineering, Journal of Heuristics, and Journal of Scheduling, as well as senior programme committees in AAAI, IJCAI and AAMAS.
Robotic manipulators have been extensively adopted in numerous fields to satisfy growing demands for dexterity, efficiency, and automation, in missions such as on-orbit servicing and active debris removal. Capturing targets with high accuracy and autonomy is attracting increasing attention in robotics, especially in space applications where the targets are non-cooperative. This paper presents the concept and experimental results of a kinematics-based incremental visual servo control approach for robotic manipulators with an eye-in-hand configuration to capture non-cooperative spacecraft autonomously. The vision system estimates the real-time position and motion of the target by an integrated algorithm of photogrammetry and the adaptive extended Kalman filter. The unknown intercept point of the trajectories of the target and the end-effector is dynamically predicted and updated based on the target estimates, and serves as the desired position of the end-effector. An incremental control law is developed for the robotic manipulator to approach the dynamically estimated interception point directly. The proposed approach is validated experimentally on a custom-built robotic manipulator. The experimental results show that the predicted minimum tracking time is reduced asymptotically as the end-effector approaches the target, demonstrating that the proposed control scheme is effective and reliable. The advantages of the proposed control approach are that it does not require the robotic dynamic model on which most existing robotic controls are based; it avoids the multiple-solution problem of inverse kinematics; it is insensitive to system uncertainties; and it is much easier to implement in engineering practice.
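The flavor of an incremental, kinematics-only control law can be conveyed with a planar two-link sketch: each cycle nudges the joints along the Jacobian transpose toward the current target estimate, so no dynamic model is required and no inverse-kinematics solution (with its multiple-solution problem) is ever computed. The arm, gain, and step count below are hypothetical, not the experimental setup of the paper.

```python
import math

def incremental_servo(theta, target, link=(1.0, 1.0), gain=0.05, steps=2000):
    """Incremental kinematics-based servoing of a planar 2-link arm.

    Each step applies dtheta = gain * J^T * error, moving the
    end-effector toward the (possibly updated) target without solving
    inverse kinematics. Returns the final end-effector distance to target.
    """
    l1, l2 = link
    t1, t2 = theta
    for _ in range(steps):
        # forward kinematics of the end-effector
        x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of (x, y) with respect to (t1, t2)
        j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
        j12 = -l2 * math.sin(t1 + t2)
        j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        j22 = l2 * math.cos(t1 + t2)
        # incremental joint update along the transposed Jacobian
        t1 += gain * (j11 * ex + j21 * ey)
        t2 += gain * (j12 * ex + j22 * ey)
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return math.hypot(target[0] - x, target[1] - y)
```

Starting from joint angles (0.5, 0.5) rad and servoing toward a reachable point (1, 1), the residual end-effector error shrinks steadily, since the update is gradient descent on the squared tracking error.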
Zheng H. Zhu completed his B.Eng., M.Eng., and Ph.D. degrees in mechanics at Shanghai Jiao Tong University in China. He also received an M.A.Sc. from the University of Waterloo and a Ph.D. from the University of Toronto, both in Canada. He is currently a professor, Tier 1 York Research Chair in Space Technology, and Department Chair of the Department of Mechanical Engineering, York University, Canada. He has published more than 240 articles, including 126 papers in reputed journals. Prof. Zhu is a Fellow of the Engineering Institute of Canada, an Associate Fellow of AIAA, a Fellow of CSME and ASME, and a senior member of IEEE.
Diego Andina was born in Madrid, Spain, where he received two Master's degrees simultaneously, in Computer Science and in Telecommunications, from the Technical University of Madrid (UPM), Spain, in 1990. He received the Ph.D. degree in 1995 with a thesis on applications of artificial neural networks in signal processing. He presently works for UPM, where he heads the Group for Automation in Signals and Communications (GASC/UPM), a multidisciplinary research group interested in signal processing and computational intelligence applications: man-machine systems and cybernetics. He is author or coauthor of more than 250 national and international publications, and has directed more than 60 R+D projects financed by the national government, the European Commission, or private institutions and firms. He is also an associate editorial member of several international research journals and transactions, and has participated in the organization of more than 60 international research, innovation, or technology transfer events.
Today big data, or data science, is an emerging field. Big data is an issue for large industries such as telecommunications, banking, and healthcare, and for large educational environments such as research universities. We need big data analytics because it surfaces the high-impact information needed by business, government, and the private sector. This talk reviews the elements of big data and current research challenges in architecture design, data storage, and information/data retrieval. Big database types such as the cloud, big data security, software engineering solutions, and big data search engines are reviewed. In addition, with the exponential growth of data collected and available, the need arises to properly sort that data and use it efficiently. This poses new challenges to agencies and almost all businesses. These huge amounts of data are known as big data, and this presentation covers the basic techniques used, real-world applications, and good uses of the data collected. The scenarios of use are countless, and almost every company has to deal with this problem, whether it needs to keep records of its sales, information about its customers, or technical information. Moreover, big data has other forms of definition in terms of quality and quantity. Therefore, big data techniques must include reduction algorithms, not merely compression techniques, to obtain useful data.
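As one generic example of a reduction (rather than compression) technique, reservoir sampling shrinks an arbitrarily large stream to a fixed-size representative sample in a single pass. The sketch below is a standard-textbook illustration, not tied to any specific system discussed in the talk.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Reduce an arbitrarily large stream to k representative records.

    Classic reservoir sampling: after seeing n items, every item has
    probability k/n of being in the sample, yet only k records are ever
    stored, so the full stream never has to fit in memory.
    """
    rng = random.Random(seed)   # fixed seed for reproducibility
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)             # fill the reservoir first
        else:
            j = rng.randrange(n)            # uniform in [0, n)
            if j < k:
                sample[j] = item            # replace with probability k/n
    return sample
```

For instance, `reservoir_sample(range(1000), 10)` returns 10 records drawn uniformly from the thousand-record stream; streams shorter than k are returned whole.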
Prof. Dr. Abdurazzag Ali Aburas received his Bachelor's degree in Computer Sciences from Tripoli University, Tripoli, Libya, in 1987. He obtained his Master's degree in Computer & Information Technology and his PhD in Digital Image Processing from Dundee University and De Montfort University, UK, in 1993 and 1997, respectively. He has worked at universities in Jordan and the UAE, in the Electrical and Computer Engineering Department of the Faculty of Engineering at the International Islamic University Malaysia (IIUM), and at the International University of Sarajevo. At present he works at the University of KwaZulu-Natal, School of Engineering, South Africa. He has also been a visiting full professor at Tecnológico de Monterrey, San Luis Potosí Campus, Mexico. He has more than 50 publications in international conferences and several papers in international journals, and holds two research patents in the image processing field. He also consults for an IT company as a senior software developer. His areas of research interest are big data/data science, curriculum development, digital signal/image/video processing, coding and compression, fractal and image/voice pattern recognition, human-mobile interaction, and algorithms. He is a member of the IEEE, HCI-UK, ARISE, and IMA societies. He was a member of the Board of Studies for engineering curriculum development (review and improvement) in the Department of Electrical and Computer Engineering, Faculty of Engineering, IIUM, where he introduced two new course curricula, for HMI (2015) and Programming for Engineering (2008). He is currently coordinator of the Cloud Computing and Mobile Research Group (CCMRG) and was coordinator of the Software Engineering Research Group (SERG) at IIUM (2006-2009). He has published books on engineering education (2013) and human-mobile interaction (2015), and a third book, on data engineering, is in progress.
Industry 4.0 is a technological revolution that integrates information technology, communication, and control in so-called cyber-physical systems. The main drivers of Industry 4.0 are big data analytics, cloud computing, artificial intelligence (AI), the Internet of Things (IoT), robotics, cybersecurity, augmented reality, autonomous vehicles, and additive manufacturing (3-D and 4-D printing technology). Industry 4.0 uses the digital twin concept to make future factories smarter. Robots have become an essential element of the industrial world and competitors to human workers in many industrial branches. Recent research focuses on vision-based control and human-robot interaction. Smart factories have become a fact of our world, where collaboration between robots and human operators (cobots) increases flexibility, productivity, and product quality. Cobots are slower and less powerful than traditional industrial robots because they have to work at the force and velocity of their human partners. The integration of the IoT with intelligent robots will improve industrial product quality. Market research shows that consumers want personalized products bearing their own touch, and it is expected that a future "Industry 5.0" will revolve around putting even more of the human touch back into products. In this talk, a brief introduction to Industry 4.0 will be presented, smart robots using AI in industrial applications will be described, the interaction between smart robots and humans will be highlighted, and different applications of intelligent robots in smart manufacturing will be demonstrated.
Prof. Wahied Gharieb Ali Abdelaal completed his PhD degree at the National University at Grenoble, France, in 1994. He is the Dean of the Egyptian Academy for Engineering and Advanced Technology, Egypt, on loan from Ain Shams University, Cairo, Egypt. He worked at KSU, KSA, from 2004 to 2013, and was awarded excellence prizes in teaching and research from KSU in 2008 and 2013. He has published 4 books, a USA patent, and more than 60 papers in reputed journals and international conferences. He has been serving as Editor-in-Chief for Control Engineering of the World Journal of Engineering and Technology.
In recent times there have been many global discourses on what AI will do and how it will change how we live. Some people see a promising future, but many predict a dark future or even a doomsday. Of course, these scenarios depend on various conditions and on how we define and chart the future of AI development; as any computer scientist knows, it is all about definitions and conditions. At the moment, AI is becoming part of every aspect of what humans do. This embedment, or integration, is due to the rapid advancement and maturity of technologies and of automation to maximize production. We have transitioned from human power to horse power to mechanical power to machine power to robotic power in our manufacturing. With each of these transitions, productivity improved exponentially, making transformational impacts on our economy and society, especially in the labor market. This talk will focus on the impact of AI on manufacturing and the human labor market and offer ideas about how humans can coexist with AI.
Dr. Dennis Anderson is Chairman and Professor of Management and Information Technology at St. Francis College, New York City. He also serves as Founding Executive Director of the Institute of E-government and Global Sustainability and the Center for Entrepreneurship. Prior to this appointment, he was a Professor of Information Systems and Associate Dean at Pace University. He also served as Founding Director of the University's Center for Advanced Media. He is a strong advocate of technology-enhanced learning, emerging technologies, sustainable technologies, and knowledge entrepreneurship. He has taught various business, information systems, and computer science courses at NYU Courant Institute, City University of New York, and Pace University. He also serves as the Chairman of NABU - Knowledge Transfer Beyond Boundaries (Special Consultative NGO to the UN ECOSOC since 2015). Anderson received his Ph.D. and M.Phil. in Mathematics Education from Columbia University. He also received an Ed.M. in Instructional Technology and Media from Columbia University. In addition, Anderson holds an M.S. in Computer Science from New York University's Courant Institute of Mathematical Sciences and a B.A. in Computer Science from Fordham University. He also completed an executive-education program in E-Commerce at Columbia University's School of Business and a professional program in multimedia at the Massachusetts Institute of Technology. He is an alumnus of Harvard University's Institute for Management and Leadership in Education Program.
In recent years, the use of autonomous mobile robots in the same living environments as people has progressed in various service industries such as cleaning and security. The outdoor environment is frequently mentioned as the future operational environment for such autonomous mobile robots. However, the outdoors differs from indoor environments, which have common physical characteristics and flat surfaces. Outdoor environmental conditions are characterized by physical properties with complicated shapes, and the robot's handling of this complexity is an important issue. For example, there may be cases where autonomous movement based on an environmental map centered on wall surfaces is not sufficient using existing wheel encoders and range measurement sensors. In this research, we therefore aim to realize an environmental map that includes information such as the objects existing in the environment other than wall surfaces, and the road surface condition. Environment recognition based on camera image analysis by deep learning is combined with conventional self-position estimation to construct an environmental map incorporating objects and the road surface condition. We further propose route planning based on this environmental map and, through verification experiments, confirm the effectiveness of the route planning and of autonomous movement based on the environmental map.
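The map structure described above, geometry plus per-cell semantics, can be sketched as a small grid that fuses occupancy from range sensing with surface labels from an assumed image segmenter, and a route planner that avoids both obstacles and bad surfaces. The cell labels and grid layout below are hypothetical.

```python
from collections import deque

def build_semantic_map(size, obstacles, labels):
    """Fuse a geometric occupancy grid with per-cell semantic labels.

    obstacles -- set of (x, y) cells reported occupied by range sensing
    labels    -- dict (x, y) -> surface class from an image segmenter
                 (e.g. 'grass', 'mud'); cells default to 'unknown'
    """
    return {
        (x, y): {
            'occupied': (x, y) in obstacles,
            'surface': labels.get((x, y), 'unknown'),
        }
        for x in range(size)
        for y in range(size)
    }

def plan_avoiding(grid, start, goal, blocked=('mud',)):
    """Breadth-first route avoiding occupied cells and bad surfaces."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        cell, path = frontier.popleft()
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            c = grid.get(nxt)
            if (c and not c['occupied'] and c['surface'] not in blocked
                    and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None   # goal unreachable under these constraints
```

On a 3x3 grid with an obstacle at (1, 1) and mud at (1, 0), the planner routes from (0, 0) to (2, 2) around both cells, which a wall-only map could not have done.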
Minoru Sasaki received the B.E. degree in mechanical engineering from Yamagata University in 1979, and the M.Eng. and D.Eng. degrees in mechanical engineering from Tohoku University in 1983 and 1985, respectively. He was a research associate at Tohoku University in 1985, a lecturer at Miyagi National College of Technology, and a visiting professor at the University of California, Los Angeles. Since 1993 he has been with the Faculty of Engineering, Gifu University, where he is currently a professor. His current research interests include control theory, mechatronics and robotics, vibration control, intelligent control, and BMI. He has published more than 300 papers and received 14 awards.
Frequency therapy is a method of diagnosis used to detect the cause of symptoms of various illnesses. It enables the determination of unknown internal characteristics of patients and of the external environmental factors affecting them, hence relieving or reducing certain symptoms. Frequency therapy is a cybernetic (regulatory) method that has proved its effectiveness in various diseases. It is a painless method for diagnostic and treatment purposes; it deals with the hidden causes provoking disease and is free of harmful side effects. The therapist uses a special painless test to find out whether there are food intolerances, whether certain organs are weak, or whether toxins might have a negative influence on the body, so that the hidden causes of the complaints can be detected. Since the method tests the frequencies of the patients, the new way of testing is multiple-channel wave testing: testing the frequency of the complaints along with the cause of the symptoms in the organs, and testing the frequency of the effects of food or medications, with all channel tests running together at the same time. This method not only detects the cause of pain or of tumors; the treatment then uses the multiple-channel frequencies to set a treatment program unique to the results of every patient.
Issa Salim completed his Master's degree in medical cybernetics at the age of twenty-six at Ilmenau University in Germany, and his PhD on monitoring brain signals at Graz University in Austria. He is a member of the international Bioresonance association. In addition, he was a deputy of the World University Service at the UN in Austria and the director of electrophysiology (EEG) in Würzburg, Germany. He is the director of the medical center of Bioresonance at Rashid Hospital in Dubai. He was awarded by Dubai's Government (Excellence Program for the year 2010) in recognition of his outstanding, creative achievements and initiatives.
Robotic surgery has emerged as a leading surgical platform in minimally invasive surgery. Research has shown improvements in patient satisfaction, pain scores, length of stay, clinical outcomes, and a host of other factors. Most robotic surgical programs have historically begun with urology and gynecology services due to slow initial adoption in the general surgery market. Recently, general surgery has seen one of the most rapid uptakes and acceptances among the surgical specialties. We describe the rapid implementation of a robotic surgery program using general surgery as the leading surgical specialty. Additionally, we discuss the development of a comprehensive robotic surgery program within an academic safety-net hospital, which presents unique challenges and opportunities. We describe the implementation of this unique multi-specialty robotic surgical program with the development of educational curricula, an outcomes database, and rigorous quality review. Additionally, we discuss the implications of developing this program within the constraints of a large safety-net hospital and its applicability to other systems.
Dr. Shaneeta Johnson is an Associate Professor of Surgery, Director of Minimally Invasive, Robotic, and Bariatric Surgery, and Associate Program Director, General Surgery Residency Program at Morehouse School of Medicine, Department of Surgery and Grady Memorial Hospital in Atlanta, Georgia. Dr. Johnson is a Fellow of the American College of Surgeons, Fellow of the American Society of Metabolic and Bariatric Surgeons, and Fellow of the International College of Surgeons. She is board certified in General Surgery and Obesity Medicine. Dr. Johnson is an expert in the field of robotic minimally invasive surgery. She is a published author and presenter with manuscripts and presentations in the fields of obesity, robotic surgery, preoperative endoscopy and bariatric surgery.
One of the main goals of the modern development of universities worldwide is to promote the social, cultural, and economic development of the society in which these higher education institutions operate, through the creation, implementation, expansion, dissemination, and use of new knowledge, establishing direct relationships with the region and all its components. In Europe this development has been called the "third mission" of universities. In the modern information society, where the rate of development is determined by the exchange of new knowledge, much of which is created in universities, education and science must be developed on the basis of projects aimed at solving real problems. This strengthens the position of educational institutions as a driving force of innovation and regional development, and a central role can be played by the networked development of universities based on intelligent analysis of data on the current state of the university. In this article we consider the use of several types of fuzzy models in research on the interaction between a university and its region, providing an increase in its efficiency: neurocomputing + fuzzy logic (NF), to substantiate and develop a specialized fuzzy processor for the relevant marketing research; fuzzy logic + chaos theory (FCh), to study the stability and efficiency of complex self-organizing systems; and neural networks + chaos theory (NCh), fuzzy neural networks with self-learning and adaptation to regional market dynamics.
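To give a flavor of the fuzzy-model ingredient, the toy below evaluates a two-rule Mamdani-style inference for a hypothetical "university-region interaction efficiency" score. The membership functions and rules are invented for illustration and are not the NF/FCh/NCh models of the article.

```python
def fuzzy_efficiency(demand, supply):
    """Two-rule Mamdani-style sketch of a fuzzy efficiency score.

    Inputs on [0, 1]; "high" is a ramp membership from 0.2 to 0.8.
      Rule 1: IF demand high AND supply high THEN efficiency high (1.0)
      Rule 2: IF demand high AND supply low  THEN efficiency low  (0.0)
    The output is the weighted average of the rule conclusions
    (a simple defuzzification).
    """
    def high(v):
        return max(0.0, min(1.0, (v - 0.2) / 0.6))

    def low(v):
        return 1.0 - high(v)

    r1 = min(high(demand), high(supply))   # firing strength of rule 1
    r2 = min(high(demand), low(supply))    # firing strength of rule 2
    if r1 + r2 == 0:
        return 0.5                         # no rule fires: neutral score
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)
```

High regional demand met by high university supply scores 1.0; high demand with low supply scores 0.0; when demand itself is low, neither rule fires and the score stays neutral.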
Kramarov Sergey Olegovich defended his Candidate's dissertation in 1979 and his doctoral dissertation in physical and mathematical sciences in 1989. At present he is chief researcher at Surgut State University. He is the author of more than 350 scientific works, including 11 monographs, 29 patents and inventions, and articles in international science journals. Under his scientific guidance seventeen candidates and one doctor have defended their dissertations, and a number of scientific projects have been realized whose results won international and Russian grant competitions at various levels, including those of the International Soros Fund, the IREX Fund (USA), the TACIS programs, the Fund of Modern Natural Sciences, the "Dynasty" Fund, and others. Prof. Kramarov is also the regional coordinator of the company Casio in the south of Russia. In different periods of his scientific activity he has been a member of a number of science and expert councils and editorial boards of scientific journals; at present he serves on the editorial boards of journals such as the international scientific journal "Modern International Technologies and IT Education", the "Russian Technological Journal", and others. Prof. Kramarov's scientific interests lie in the study of complex objects and systems and the use of modern information technologies. He frequently takes part in international conferences (in the USA, Canada, the Netherlands, Portugal, Poland, China, and elsewhere). Prof. Kramarov has received a number of state and public awards: "Inventor of the USSR", the "Labore et Scientia" medal, the "A. Nobel" medal, and others. He is the head of the scientific research school "The features of formation and forecast of macroscopic qualities of micro-inhomogeneous objects and systems".
This study adopts the concept of augmented creativity to teach young children and children with special needs. The goal is to open new possibilities for interaction and to support education with creative activities through technology. There is a gap in teaching and learning with technology in the classroom; for this reason, this study aims to find to what extent computers, mobile devices, and disruptive technologies can assist teachers in the creative process. In this context the study describes the experience of building hybrid interfaces that combine technology with traditional educational resources. A total of 90 participants, divided into three groups, completed tasks that consisted of generating new educational resources with technology. The participants were primary and secondary education teachers, who worked during six sessions in six groups: Language and Literature; STEM (Science, Technology, Engineering and Mathematics); Physics and Chemistry; Natural Sciences; Social Sciences; and Entrepreneurship. Through Design Thinking methodologies, which focus on end users' experience and the co-creation of solutions, the teachers designed three hybrid interfaces: 1. interactive books, combining traditional fairy tale books with mobile devices, where QR codes and NFC tags give life to the stories; 2. educational board games, where augmented reality markers give extra information to the players; 3. tangible educational resources, which integrate the Makey-Makey device and Scratch with fruit, clay, aluminum foil, or water to build a laboratory. The results of this experience show an increase in the creativity of teachers who used the augmented creativity tools to develop the child-friendly interfaces needed to develop new skills in children.
Janio Jadán-Guerrero completed his PhD at Universidad de Costa Rica, Costa Rica, and Universidad Politécnica de Valencia, Spain. He is the director of the Mechatronics and Interactive Systems (MIST) Research Center at Universidad Tecnológica Indoamérica, Ecuador. He has published more than 30 papers in reputed journals and has been serving as an editorial board member of journals of repute (Revista CienciAmérica, International Design & Children, and others). Johann Jadán-Altamirano is a student of Electronic Engineering at Universidad San Francisco de Quito, Ecuador. He participated in the World Universities Debating Championship (WUDC 2018), Mexico. He has published a paper in science and technology and has participated in several congresses of computer and electronic science.
Waiter robots were introduced in the food and beverage industry in the early 2000s in a number of countries, especially in Asia, but with mixed receptions. Poor designs, high costs and operational constraints are some reasons the robots have not been implemented on a larger scale. In this paper, we discuss some solutions that address these problems. To keep costs manageable, open-source software algorithms for mobile robotics are used, integrated with mobile Wi-Fi technologies currently in use in F&B settings. Applications such as QR codes allow for the tracking of customers' locations, and a secured local-area Wi-Fi network provides integration with the business system. A simpler mechanical design was adopted for the two types of bots developed, a server robot and a social robot; both share the same base design, sensor suite, electronics and motion control system. Using digital maps and laser sensors, the server robot is able to navigate autonomously from a serving point to target point #1, 5 m away, to within 0.5 m (repeated 5 times), and then proceed to target point #2, 5 m away in a different direction, arriving to within 0.5 m (also repeated 5 times). The social robot was evaluated at a stationary position using voice and facial expressions when approached by a human. It is popular with younger patrons.
Dr Edwin Foo is a Lecturer in the Mechatronics & Systems Integration Group at Nanyang Polytechnic. He graduated from Loughborough University of Technology with first-class honours in Electrical and Electronic Engineering. He also obtained his PhD from Loughborough University, working in control engineering pertaining to active suspension. After graduation, he joined TRW (Automotive, UK) as a research engineer to develop new sensor fusion algorithms for driverless cars. In 2002, he joined Singapore Technologies Dynamics as a control engineer to develop an autopilot control system for an unmanned aircraft. He was later transferred to Singapore Technologies Aerospace to work further on various unmanned aircraft projects. In 2005, he joined Nanyang Polytechnic as a lecturer to teach C programming, control system engineering and embedded system technology. Through the years, he has been involved in several local robotics competitions, industrial projects and the Temasek Foundation Technical and Vocational Education and Training (TVET) training program. In 2010, he was awarded $239,000 in funding for the "Development of a Semi-Portable Rehabilitation System for Upper Limb Disability" from NRF (NRF2010OTH-TRD001-015).
In this paper, we study the nonlinear fractional Duffing equation and its properties: chaos, relativistic energy-momentum, electrodynamics, and electromagnetic interactions. These properties have many benefits for a wide range of applications in various scientific fields. Relativistic energy is the kind of energy that results from an object's mass, while relativistic momentum is considered the result of the rest-mass energy and the kinetic energy of motion. On the other hand, we deal with the force that results from the interaction between electrically charged particles. This force, called electromagnetism or the Lorentz force, comprises magnetism and electricity as different phenomena of the same source. The Duffing equation is one of the major equations in the engineering branch of science. It describes the energy of a point mass that is considered a periodically forced oscillator. We implement twelve different methods on the nonlinear fractional Duffing equation to find many different formulas of explicit solutions and approximate solutions. The obtained solutions show many benefits in their applications. We also study the stability and the magnetic and electric fields of the solutions to show their suitability for application. All obtained solutions are checked with the Mathematica program.
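For reference, the classical forced Duffing oscillator, together with one common fractional generalization via a Caputo derivative, can be written as follows; the exact fractional formulation studied in the paper may use a different convention:

```latex
% Classical forced Duffing oscillator:
% displacement x(t), damping \delta, linear stiffness \alpha,
% cubic nonlinearity \beta, forcing amplitude \gamma, frequency \omega
\ddot{x}(t) + \delta\,\dot{x}(t) + \alpha\, x(t) + \beta\, x^{3}(t) = \gamma \cos(\omega t)

% A common fractional generalization replaces the second derivative
% with a Caputo derivative of order 1 < \nu \le 2:
{}^{C}\!D_{t}^{\nu} x(t) + \delta\,\dot{x}(t) + \alpha\, x(t) + \beta\, x^{3}(t) = \gamma \cos(\omega t)
```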
An environmental monitoring process consists of the continuous collection, analysis and reporting of observations or measurements of environmental characteristics. Different environmental components (soil, air, water, etc.) can be described and qualified using different types of sensors. These sensors perform regular measurements that are sent to a central system to be analyzed using specific diagnostic tools. The final objective is to discover and infer new knowledge about the environment, in order to help the administrator make good decisions. A main purpose of the monitoring system is to detect anomalies, also called "events". Different data mining techniques are applied to the collected data in order to infer, in real time, aggregated statistics useful for anomaly detection and forecasting purposes. This process helps the administrator supervise the observed system and quickly make the right decisions. The whole process, from data collection to data analysis, raises two major problems: the management of the data volume and the quality of the data. On the one hand, a sensor generates data in the form of a stream, a large volume of data sent to the monitoring system continuously. The arrival rate of the data is very high compared to the available processing and storage capacities. The monitoring system is thus faced with a large amount of data for which permanent and exhaustive storage is very expensive and sometimes impossible. That is why we need to process the data stream in one pass, without storing it. However, for a particular stream, it is not always possible to predict in advance all the processing to be performed. On the other hand, in a real-world sensor environment, the data are often dirty: they contain noisy, erroneous, duplicate, and/or missing values.
This is due to many factors: local interference, malicious nodes, network congestion, limited sensor accuracy, harsh environments, sensor failure or malfunction, calibration error, and insufficient sensor battery. As in any data analysis process, the conclusions and decisions based on these data may be faulty or erroneous if the data are of poor quality. In order to deal with these problems, we propose the native filtering of data streams as a solution to the two problems related to data collection: the huge volume of generated data and their poor quality. This solution consists of filtering the data qualitatively (evaluating and improving the quality of the received data) and then quantitatively (summarizing the data).
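The paper's specific filtering operators are not detailed here, but the quantitative step (one-pass summarization of an unbounded stream) can be illustrated with classic reservoir sampling, which keeps a fixed-size uniform sample without ever storing the full stream:

```python
import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """One-pass uniform sampling: keep k items from a stream of unknown
    length without storing the stream itself."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # replace with decreasing probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Summarize a simulated stream of 10,000 sensor readings with a 100-item sample.
sample = reservoir_sample(range(10_000), k=100)
```

Each arriving item costs O(1) time and the memory footprint stays fixed at k items, which matches the constraint that the arrival rate exceeds the available storage capacity.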
Rayane El Sibai received her master's degree in software engineering from Antonine University, Beirut, Lebanon, in 2014. She obtained her Ph.D. degree in computer science from Pierre and Marie Curie University - Sorbonne University, Paris, France, in 2018. Currently, she is an instructor at Al Maaref University, Beirut, Lebanon. She is also an active researcher working in collaboration with the Institut Supérieur d'Électronique de Paris, Paris, France. Her research interests include data stream processing, data summarization, data quality, and cloud computing.
Hospital Information Systems (HIS) are responsible for the production, analysis and dissemination of diverse data in hospital facilities. As the availability of these data increases, so does the problem of data quality. Indeed, despite the fact that several solutions have been developed to deal with data quality, dirty data persist. In this context, an HIS should be provided with an accurate solution for dirty data detection and resolution. The present work proposes a rule-based approach both for describing a subset of the dirty data occurring in Discharge Data Summaries, called Discharge Dirty Data Summaries (D3S), and for assisting health practitioners in repairing it. Precisely, the proposed solution provides an automatic way to deal with D3S. An empirical evaluation of the proposed approach on real clinical data provides preliminary evidence of its effectiveness.
Lydia RABIA has a Master's degree in Mathematics and Computer Science, option: Information Systems and Web Technologies, from the Polytechnical Military School, Algeria. She is responsible for software development in the Department of Medical and Hospital Informatics at the Central Hospital of the Army, Algiers, Algeria. She is a teacher at the Polytechnical Military School, Algiers, Algeria, and has supervised several engineering dissertations. She has published and presented her work at renowned international conferences.
Undoubtedly, optimization is used everywhere: engineering design, business planning, routing of the Internet and even holiday planning. Finding an appropriate solution is a challenging issue. Currently, with the use of computer simulation and various efficient stochastic search algorithms, it has become easier to face and solve such optimization problems. Generally, there are two types of stochastic algorithms, heuristic and metaheuristic, although the difference between them is small. Two major components of any metaheuristic algorithm are intensification and diversification, also called exploitation and exploration. Diversification means generating diverse solutions so as to explore the search space on a global scale, while intensification means focusing the search in a local region by exploiting the information that a current good solution has been found there. This is combined with the selection of the best solutions. The selection of the best ensures that the solutions will converge to optimality, while diversification via randomization avoids the solutions being trapped at local optima and, at the same time, increases the diversity of the solutions. A good combination of these two major components will usually ensure that global optimality is achievable. Metaheuristic algorithms can be classified in many ways. One way is to classify them as population-based and trajectory-based. For example, GAs and PSO are population-based, as they use a set of strings or agents. On the other hand, simulated annealing uses a single agent or solution which moves through the design or search space. The steps or moves trace a trajectory in the search space, with a non-zero probability that this trajectory can reach the global optimum.
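As a concrete illustration of a trajectory-based metaheuristic balancing the two components, here is a minimal simulated annealing sketch; the toy objective and parameter values are illustrative only:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, rng=random.Random(42)):
    """Trajectory-based search: a single solution moves through the space.

    Intensification: improving moves are always accepted and the best is kept.
    Diversification: worse moves are accepted with probability exp(-delta/T),
    which shrinks as the temperature T cools."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)     # random move (diversification)
        delta = f(cand) - fx
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x, fx = cand, fx + delta
            if fx < fbest:
                best, fbest = x, fx             # track the best (intensification)
        t *= cooling
    return best, fbest

# Toy multimodal objective with global minimum f(0) = 0.
xbest, fbest = simulated_annealing(lambda x: x**2 + 2 * math.sin(5 * x)**2, x0=3.0)
```

The cooling schedule is exactly the trade-off described above: early on, high temperature makes the walk nearly random (exploration); later, only improving moves survive (exploitation).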
Vahideh Hayyolalam received her B.S. degree in Applied Mathematics from Payame Noor University of Tabriz, Iran, in 2005 and her M.Sc. degree in Computer Software Engineering from the Science and Research Branch, Islamic Azad University, Iran, in 2016. Currently, she is a researcher at Islamic Azad University. Her research interests include evolutionary computing, optimization, metaheuristic algorithms, cloud computing and service composition. She is currently a referee for the journals "Journal of Experimental & Theoretical Artificial Intelligence" and "American Journal of Networks and Communications".
Scientific and technical progress is developing rapidly, creating great opportunities for biometric technologies. The application of these technologies plays an important role in preventing numerous dangerous events. One of the effective means of detecting and neutralizing criminals is precisely the opportunity that biometric identification technologies create. The ways of ensuring security in a biometric network are clarified. Effective methods of recognition are investigated and their comparative analysis is carried out. Ways of increasing recognition accuracy in a biometric network are studied and a new method is suggested. In order to achieve a high recognition level, it is expedient to use a unified platform. The advantage of implementing identification algorithms over biometric features is that the identification results for all biometric features are handled by the operator in a single, uniform way. Subsequently, the transmission of results, their measurement and the approaches to decision-making are designed for the organization of biometric identification algorithms. A unified system of biometric search and national criminal records is formed at the functional level as follows: 1. Realizing identification according to the data collection, registration cards and descriptions of wanted persons; 2. Data processing and storage, and computing operations; 3. Organization of a biometric data bank and a registration card database; 4. Definition of the search parameters, delivery of the data obtained as the result of face detection, search result analysis and expertise, and so on. One approach is to implement human recognition by using biometric networks. There is no precise definition of biometric networks. Databases obtained from different sources are collected in a biometric network. Identification of a person is implemented by using biometric characteristics and the corresponding biometric features collected in the databases.
For instance, human facial images, fingerprints, palm patterns, the iris, etc. are examples of biometric characteristics. Implementing identification according to multiple biometric characteristics of a person increases recognition accuracy; for this purpose, distinct algorithms developed for identification are used. Identification of any person can be implemented on the basis of the databases collected in biometric networks.
Shafagat Mahmudova defended her thesis on the "Development of methods and algorithms for human face recognition on the basis of photo-portraits" in the specialty 3338.01 - "System analysis, control and information processing" and obtained a PhD degree in technical sciences. She is an associate professor. She is the author of 49 articles and 44 conference theses, 53 of which were published in international journals.
In real applications of hub networks, travel times may vary due to traffic, climate conditions, and land or road type. To address this difficulty, in this paper the travel times are assumed to be characterized by trapezoidal fuzzy variables, yielding a fuzzy capacitated single allocation p-hub center transportation problem (FCSApHCP) with uncertain information. The proposed FCSApHCP is redefined into its equivalent parametric integer nonlinear programming problem using credibility constraints. The aim is to determine the location of p capacitated hubs and the allocation of center nodes to them in order to minimize the maximum travel time in a hub-and-center network under uncertain environments. As the FCSApHCP is NP-hard, a novel approach called the knowledge-based genetic algorithm (KBGA) is developed to solve it. This algorithm utilizes two knowledge modules to gain good and bad knowledge about hub locations, which it saves in a good and a bad hub memory, respectively. As there is no benchmark available to validate the results obtained, a genetic algorithm with multi-parent crossover is designed to solve the problem as well. The algorithms are then tuned, based on which their performances are analyzed and compared statistically. The applicability of the proposed approach and the solution methodologies are demonstrated. Finally, sensitivity analyses on the discount factor in the network and the memory sizes of the proposed KBGA are conducted to provide more insights. The results show that appropriate memory sizes can enhance the convergence of the KBGA and preserve population diversity simultaneously.
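The paper's exact operators are not reproduced here, but the good/bad hub-memory idea can be sketched on a toy p-center-style location problem; all names, parameters and the objective below are illustrative assumptions, not the FCSApHCP model itself:

```python
import random

rng = random.Random(1)

# Toy instance: pick p hub nodes among n points in the unit square so that
# the worst node-to-nearest-hub distance is minimized (p-center style).
n, p = 30, 4
pts = [(rng.random(), rng.random()) for _ in range(n)]

def dist(a, b):
    return ((pts[a][0] - pts[b][0])**2 + (pts[a][1] - pts[b][1])**2) ** 0.5

def cost(hubs):
    return max(min(dist(i, h) for h in hubs) for i in range(n))

def kbga(pop_size=40, gens=60, mem_size=5):
    """Sketch of the knowledge-memory idea: hubs seen in the best solutions
    form a 'good' memory, hubs seen only in the worst form a 'bad' memory,
    and mutation is biased toward good, and away from bad, hub locations."""
    pop = [rng.sample(range(n), p) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        good_mem = {h for s in pop[:mem_size] for h in s}
        bad_mem = {h for s in pop[-mem_size:] for h in s} - good_mem
        children = pop[:pop_size // 2]                     # elitism
        while len(children) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            child = rng.sample(list(set(a) | set(b)), p)   # set-based crossover
            if rng.random() < 0.3:                         # knowledge-guided mutation
                out = rng.choice(child)
                cand = [h for h in good_mem if h not in child and h not in bad_mem]
                cand = cand or [h for h in range(n) if h not in child]
                child[child.index(out)] = rng.choice(cand)
            children.append(child)
        pop = children
    return min(pop, key=cost)

best = kbga()
```

The two memories play the roles described in the abstract: the good memory speeds convergence by recycling promising hub locations, while excluding bad-memory hubs keeps the mutation from re-introducing poor ones.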
Amir Hossein Niknamfar received his B.Sc. and M.Sc. degrees, both in Industrial Engineering, from the Islamic Azad University, Qazvin Branch, Iran, in 2009 and 2013, respectively. He is currently a member of the American Institute of Industrial and Systems Engineers, U.S.A., and a referee for the journals Applied Mathematical Modelling, Applied Soft Computing, Assembly Automation, Computers & Industrial Engineering, Expert Systems with Applications, Knowledge-Based Systems, and Transportation Research Part E.
Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools used to automatically determine one's suitability for a job, to health diagnostic systems trained to determine a patient's outcome, machine learning models are used to make decisions that can have serious consequences on people's lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task, nor are they required to provide documentation describing the characteristics of their models, or to disclose the results of algorithmic audits to ensure that certain groups are not unfairly treated. I will show some examples to examine the dire consequences of basing decisions entirely on machine learning based systems, and discuss recent work on auditing and exposing the gender and skin-tone bias found in commercial gender classification systems. I will end with the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole towards transparency and accountability.
Timnit Gebru is the technical co-lead of Ethical AI at Google and just finished her postdoc in the Fairness Accountability Transparency and Ethics (FATE) group at Microsoft Research, New York. Prior to joining Microsoft Research, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. She is currently studying the ethical considerations underlying any data mining project, and methods of auditing and mitigating bias in sociotechnical systems. The New York Times, MIT Tech Review and others have recently covered her work. As a cofounder of the group Black in AI, she works to both increase diversity in the field and reduce the negative impacts of racial bias in training data used for human-centric machine learning models.
Despite the numerous advantages associated with flexible manipulators, link vibrations stand in the way of reaping these benefits. This leads to time wasted waiting for vibrations to decay to safe operating levels and the possibility of mechanical failure due to vibration fatigue. Research has shown that collocated control is very effective in the control of position. However, it is very poor at increasing system damping, which is very important in the control of flexible robots. This paper presents direct strain feedback, a non-collocated vibration control technique, with the feedback gains tuned using artificial neural networks on a 3D flexible manipulator. Online backpropagation was developed in MATLAB Simulink and implemented in the dSPACE environment for practical experiments. Results show a significant reduction in link vibration relative to the performance of fixed feedback gains.
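The plant and control law below are not the paper's models; they are a hypothetical single-mode surrogate of one flexible link, included only to show the general idea of tuning a strain-feedback gain online by descending an accumulated-vibration cost:

```python
# Hypothetical surrogate of one flexible link: a lightly damped oscillator
# with a strain-rate feedback torque u = -k*v.
def simulate(k, steps=2000, dt=0.001):
    x, v = 1.0, 0.0            # strain-like deflection and its rate
    vib = 0.0                  # accumulated vibration cost: integral of x^2
    for _ in range(steps):
        u = -k * v             # direct strain(-rate) feedback adds damping
        a = -50.0 * x - 0.2 * v + u
        v += a * dt            # semi-implicit Euler integration
        x += v * dt
        vib += x * x * dt
    return vib

# Crude stand-in for the online backpropagation stage: tune the gain k by
# gradient descent on the vibration cost (finite-difference gradient).
k = 0.0
for _ in range(30):
    grad = (simulate(k + 1e-3) - simulate(k - 1e-3)) / 2e-3
    k -= 0.5 * grad
```

In the paper the gradient comes from backpropagating through a neural network on the real 3D manipulator; here the finite difference on a toy model merely illustrates why the adapted gain outperforms the fixed gain k = 0.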
Waweru Njeri graduated from Jomo Kenyatta University of Agriculture and Technology in Kenya with a Bachelor of Science and a Master of Science in Telecommunication Engineering in 2009 and 2014, respectively. He has been a PhD student in Mechanical Engineering at Gifu University since 2016. His current research interests include vibration control of flexible manipulators, intelligent control of electro-mechanical systems, robot dynamics and design.
Several psychology models have been proven to have significant accuracy in deducing the patterns of traits that affect the productivity of students. However, this category of analysis requires meticulous observation of an individual, which is tedious considering population constraints. If we could produce a system able to learn and process a custom behaviour skillset, we might just be able to bring about a revolution in teaching methodologies. This is where AI-based automation techniques come in.
Rapid urbanisation has brought about great challenges to our daily lives, such as traffic congestion, environmental pollution, energy consumption, public safety and so on. Research on smart cities aims to address these issues with various technologies developed for the Internet of Things. Very recently, the research focus has shifted towards the processing of the massive amounts of data continuously generated within a city environment, e.g., physical sensing data on traffic flow, air quality and healthcare, and participatory sensing data. Many machine learning techniques have been applied to process and analyse city data, and to extract useful knowledge that helps citizens better understand their surroundings and informs city authorities to provide better and more efficient public services. Deep learning, as a relatively new paradigm in machine learning, has attracted substantial attention from the research community and demonstrated greater potential than traditional techniques. This talk draws a clear landscape of the latest research on deep learning from smart city data from two perspectives. While the technique-oriented review pays attention to the popular and extended deep learning models, the application-oriented review emphasises the representative application domains in smart cities. It also identifies a number of challenges and future research directions for this rapidly evolving field, e.g., efficient deep networks, emerging deep learning paradigms, and deep learning based knowledge fusion. We hope this will motivate more smart city researchers to further explore this exciting research topic and develop more creative and computationally practical deep learning models specifically designed for smart city data of various types and modalities.
Dr. Wei Wang received his PhD in Computer Science from University of Nottingham, in 2009. He is currently an associate professor at the Department of Computer Science and Software Engineering, Xi’an Jiaotong Liverpool University, China. His research interests lie in the broad area of data and knowledge engineering; in particular, knowledge discovery from textual data, social media data and data on the Internet of Things, semantic search and deep learning for data processing and knowledge discovery. He has published several highly cited works on Internet of Things and semantic search and more than 40 papers in reputed journals and conferences.
Automated and precise classification of MR brain images is tremendously important for medical analysis and interpretation. Over the last decades, numerous classification techniques have been proposed. In this work, we present a comparative study between two techniques that classify the type of tumour in a given MR brain image as benign or malignant. The first technique applies the two-dimensional Discrete Wavelet Transform (DWT-2D) to extract features from the images. Principal Component Analysis (PCA) is then applied to reduce the dimensionality of the extracted features, and the reduced features are presented to a random forest classifier. The second technique is also based on DWT-2D and PCA, but uses a Kernel Support Vector Machine (KSVM). In our comparative study we applied these two classification techniques to 11 MR brain images classified as benign tumours and 12 MR brain images classified as malignant tumours. The first technique achieves 81.81% accuracy for benign tumours and 83.33% for malignant tumours. The second technique achieves 100% for both benign and malignant tumours.
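A minimal sketch of the shared pipeline (wavelet features, then PCA, then a classifier) on synthetic data: the Haar LL sub-band stands in for DWT-2D, a nearest-centroid rule stands in for the RF/KSVM stage, and all images below are simulated, not MR data:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_ll(img):
    """One level of the 2-D Haar wavelet transform, keeping the LL sub-band
    (a stand-in for the DWT-2D feature extraction stage)."""
    rows = (img[0::2, :] + img[1::2, :]) / 2
    return (rows[:, 0::2] + rows[:, 1::2]) / 2

# Simulated stand-ins for MR slices: 'malignant' images carry a bright blob.
def make_image(label):
    img = rng.normal(0.0, 1.0, (32, 32))
    if label:
        img[8:16, 8:16] += 3.0
    return img

labels = np.array([i % 2 for i in range(40)])
feats = np.array([haar_ll(make_image(l)).ravel() for l in labels])

# PCA via SVD: center the features, then project onto 5 principal directions.
mu = feats.mean(axis=0)
_, _, vt = np.linalg.svd(feats - mu, full_matrices=False)
proj = (feats - mu) @ vt[:5].T

# Nearest-centroid rule as a minimal stand-in for the RF / KSVM classifiers.
c0 = proj[labels == 0].mean(axis=0)
c1 = proj[labels == 1].mean(axis=0)
pred = (np.linalg.norm(proj - c1, axis=1)
        < np.linalg.norm(proj - c0, axis=1)).astype(int)
accuracy = float((pred == labels).mean())
```

The point of the two-stage front end is dimensionality: the LL sub-band shrinks each 32x32 image to 256 coefficients, and PCA shrinks those to 5 components before any classifier sees them.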
The task of automatic summarization has attracted increasing interest in the NLP and text mining communities. Summarization helps in reducing a text in the shortest way possible such that its significant properties remain preserved and important information can still be gained from it. With the huge number of reviews posted online, a summary is thus required to help a person make a correct decision considering all important aspects. In this paper, summaries are generated using an extractive technique that is well formed to convey the gist of the text. The advantages of our proposed method lie in greater computational efficiency, data understanding, robustness and the handling of sparse data. This paper addresses the aspect-based opinion summarization problem by proposing a novel unsupervised method that combines theories of rational awareness with sentence dependency trees to identify aspects. Thereafter, a Principal Component Analysis based technique is discussed for the generation of aspect-based summaries. Experiments are carried out on dissimilar datasets consisting of numerous opinions, and comparison with previous approaches demonstrates the success of the work. The accuracy results are reported with F-scores of 0.14197 (ROUGE-1) and 0.03021 (ROUGE-2) for extractive summarization on the Opinosis dataset. Three random individuals were contacted for reference summaries in order to compare them with the system-generated summaries on the real dataset for subjective evaluation.
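The paper's aspect-identification stage (dependency trees plus rational awareness) is not reproduced here, but the PCA-based extraction step can be sketched: score each sentence by the magnitude of its projection onto the top principal components of a term-frequency matrix, and extract the top-k. The toy sentences are hypothetical, not from the Opinosis corpus:

```python
import numpy as np

# Toy opinion sentences (hypothetical data).
sentences = [
    "the battery life is great and lasts all day",
    "battery life could be better after a year of use",
    "the screen is bright and sharp",
    "shipping was slow but the packaging was fine",
    "overall the battery and the screen make it worth the price",
]

# Bag-of-words term-frequency matrix: one row per sentence.
vocab = sorted({w for s in sentences for w in s.split()})
tf = np.array([[s.split().count(w) for w in vocab] for s in sentences], float)

# PCA via SVD: score each sentence by the magnitude of its projection onto
# the top two principal components, then extract the top-2 as the summary.
mu = tf.mean(axis=0)
_, _, vt = np.linalg.svd(tf - mu, full_matrices=False)
scores = np.abs((tf - mu) @ vt[:2].T).sum(axis=1)
summary = [sentences[i] for i in np.argsort(scores)[::-1][:2]]
```

Because the principal components capture the dominant term co-occurrence patterns, sentences loading heavily on them tend to cover the most-discussed aspects, which is the intuition behind PCA-based extractive selection.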
Surbhi Bhatia completed her PhD at Banasthali Vidyapith, Rajasthan, and is currently teaching at K.R. Mangalam University, Gurgaon, as an Assistant Professor, with almost 7 years of teaching experience. She has published 20 research papers in various international and national journals and serves as an editorial board member of journals of repute. Her areas of interest include data mining, machine learning and information retrieval.
Designing, training and testing deep neural networks for a particular business, scientific or real-world requirement is always a challenge and does not guarantee the desired level of accuracy across different data sets. As a deep network grows in size and depth, its complexity becomes almost unmanageable in many scenarios, and hyperparameter tuning becomes more and more complex and tedious. All these challenges stem from the fundamental limitations on which neural networks are based. There is also limited theoretical validation and technical sophistication behind many of the popular network designs. The proposed approach, named "Layered Approximation", is a design principle which can be applied to different deep neural networks during their design and which can address some of these training and testing challenges to a great extent. With the Layered Approximation design approach applied to a deep neural network, the requirements for huge training data and huge computational power can also be addressed to a certain extent, thus reducing training time and testing complexity.
Utpal Chakraborty is an eminent data scientist and researcher with 20 years of experience, including work as a Principal Architect at IBM, Capgemini, L&T Infotech and other MNCs in his past assignments. At the moment he is Head of Artificial Intelligence at YES BANK. Utpal is a well-known speaker and researcher on artificial intelligence, agile and lean, speaking at conferences around the world. His recent research on machine learning, titled "Layered Approximation for Deep Neural Networks", has been appreciated at different conferences and institutions. He has also demonstrated a few completely out-of-the-box hybridized agile and lean implementations in different industries; among these, his implementation frameworks for movie productions and government enterprises are remarkable and appreciated by the agile and lean communities across the globe.
Nowadays, the emergence of artificial intelligence and deep learning has led to major advances in several areas: medical imaging, remote sensing, big data analysis, pattern recognition, autonomous vehicles, etc. In this study, a new idea is developed for improving agent intelligence. In fact, with the presented Convolutional Neural Network (CNN) model for knowledge classification, the agent becomes able to manage its knowledge. This new concept allows the agent to select only the actionable rule class, instead of trying to infer over its whole rule base exhaustively. In addition, through this research, we conducted a comparative study between the proposed CNN approach and classical classification approaches. As expected, the deep learning approach outperforms the others in terms of classification accuracy.
Amine Chemchem received his licentiate degree in computer science from the University of Jijel, Algeria, in 2009. He then joined USTHB University, where he received an M.S. degree in intelligent computer systems in 2011. After graduation, Amine was admitted as a Ph.D. student at LRIA (Laboratory of Research in Artificial Intelligence) at USTHB University. His Ph.D. thesis, directed by Prof. Habiba Drias, is entitled "From data mining to knowledge mining: incremental learning and multilevel approaches". In 2015, Amine obtained his Ph.D. in computer science. From September 2015 to January 2018, Amine was a lecturer-researcher in the computer science department of Saad Dahleb University, Algeria, and a part-time teacher at the Algerian Marines High School. Since January 2018, Amine has been a postdoctoral researcher at the CReSTIC Lab at the University of Reims Champagne-Ardenne; his research focuses on artificial intelligence, data mining, machine and deep learning, GPUs, multi-agent systems, and metaheuristics.
An edge detection technique needs to be assessed before using it in a computer vision task. As dissimilarity evaluations depend strongly on a ground truth edge map, a datum that is inaccurate in terms of localization could favor imprecise edge detectors and/or mislead a dissimilarity evaluation measure. We demonstrate how to label these ground truth data in a semi-automatic way. The most common method for ground truth definition in natural images remains manual labeling by humans, as in the Berkeley Segmentation Dataset proposed by Martin and Fowlkes in 2001. These data sets are not optimal in the context of the definition of low-level segmentation. Errors may be created by human labels (oversights or additions); indeed, a ground truth contour map that is inaccurate in terms of localization penalizes precise edge detectors and/or advantages rough algorithms. Likewise, an incomplete ground truth penalizes an algorithm detecting true boundaries, and efficient edge detection algorithms obtain between 30% and 40% errors. The new labeling process therefore improves on purely hand-made ground truth. First, the contours are detected by convolving the image with [-1 0 1] vertical and horizontal masks, followed by the computation of a gradient magnitude and the suppression of local non-maxima in the gradient direction. Concerning color images, the [-1 0 1] vertical and horizontal masks are applied to each channel of the image, followed by a structure tensor. Second, undesirable edges are deleted and missing points are added, both by hand. Using the [-1 0 1] mask makes it possible to capture the majority of edge points and corners without deforming small objects, contrary to edge detectors involving Gaussian filters. By comparison with a real image where contour points are not precisely labelled, experiments illustrate that the new ground truth database allows the performance of edge detectors to be evaluated by filtering.
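The first, automatic stage can be sketched as follows: apply the [-1 0 1] mask horizontally and vertically and form the gradient magnitude (non-maxima suppression and the color structure tensor are omitted for brevity). The test image is a synthetic step edge, not from the dataset:

```python
import numpy as np

def gradient_magnitude(img):
    """Apply the [-1 0 1] mask horizontally and vertically (as in the text)
    and return the gradient magnitude; one-pixel borders stay at zero."""
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal derivative
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical derivative
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = gradient_magnitude(img)
# Non-zero responses sit only on the two columns straddling the step;
# non-maxima suppression along the gradient direction would then thin
# them to a single-pixel contour.
```

Because the [-1 0 1] mask has a two-pixel support and no smoothing, the response stays tightly localized on the step, which is exactly the property the text credits it with versus Gaussian-based detectors.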
Hasan Abdulrahman completed his PhD in 2017 at the University of Montpellier, École des Mines d'Alès, France. He is an assistant professor at the Northern Technical University, Department of Technical Computer Systems. He has published more than 16 papers in reputed journals and has been serving as an editorial board member of journals of repute.
The main goal of this work is to create software for a microcontroller controlling a quadcopter, aimed at determining a flight trajectory that follows a moving vehicle (object). The wide use of drones nowadays and the desire to apply modern knowledge have inspired the authors to assemble their own flying vehicle and to develop and program its control system. Object detection is one of the fundamental problems of computer vision, and it usually turns out to be difficult. In many respects it is similar to other computer vision tasks, because it involves creating solutions that are invariant to deformations and to changes in lighting and viewpoint. What makes object detection a real problem is that it covers both the localization of the object and image classification. Although object detection algorithms are well suited to RGB images, they are not robust enough to be applied directly to 3D cases. Conversely, searching directly in point clouds does not seem efficient; key-point detection there usually relies on hand-crafted features. In our methodology, we try to avoid the disadvantages of these previous approaches while focusing on their strengths. We propose an approach that uses the advantages of convolutional neural networks (CNNs) to perceive RGB and 3D images as aggregations of data. By combining object detection on RGB images and the object detection scores with filtering and general customization, we are able to achieve robust detection of objects in 3D scenes.
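The combination of 2D detection scores with 3D filtering can be illustrated by a late-fusion sketch: each detection gets a fused confidence from its RGB score and a 3D consistency score, and weak fused detections are filtered out. The weights, field names, and threshold below are our assumptions for illustration, not the authors' model.

```python
def fuse_scores(rgb_score, depth_score, w_rgb=0.6, w_depth=0.4):
    """Late fusion of per-detection confidences from two modalities."""
    return w_rgb * rgb_score + w_depth * depth_score

detections = [
    {"label": "car", "rgb": 0.9, "depth": 0.8},
    {"label": "car", "rgb": 0.7, "depth": 0.1},   # likely a 2D false positive
]
# Keep only detections whose fused confidence passes a threshold.
kept = [d for d in detections if fuse_scores(d["rgb"], d["depth"]) > 0.5]
```

The point of the sketch is that a detection which is confident in RGB but inconsistent with the 3D data is suppressed, which is how 3D information makes the 2D detector more robust.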
Siva Ariram completed his MSc in Control Engineering and Robotics at Gdansk University of Technology, Poland, and is currently in the first year of his doctoral studies in the Biomimetics and Intelligent Systems Group (BISG) at the University of Oulu, Finland. He has published one paper in the International Journal of Medical Sciences.
Faults can appear in several functional areas of a complex cellular network. However, the most critical domain from a fault management viewpoint is the Radio Access Network (RAN). The fault management of network elements is not only difficult but also imposes high costs, both in capital investment (CAPEX) and operational expenditures (OPEX). The Self-Organizing Network (SON) concept has emerged with the goal of fostering automation and reducing human involvement in management tasks. Self-healing (SH) is the part of SON that refers to autonomous fault management in wireless networks, including performance monitoring, detection of faults and their causes, triggering of compensation and recovery actions, and evaluation of the outcome. It improves business resiliency by eliminating disruptions as they are discovered, analyzed, and acted upon. With the advent of 5G technologies, the management of SON becomes more challenging. Traditional SH solutions are not sufficient for the future needs of cellular network management because of their reactive nature, i.e., they start recovering from faults after detection instead of preparing for possible faults in a preemptive manner. The detection delays are especially problematic with regard to the zero-latency requirements of 5G networks. In order to address this challenge, existing SON-enabled networks need to be upgraded with additional features. AI techniques can be effectively used to self-optimize, self-configure, and self-heal the cellular network. This situation pushes operators to upgrade their SONs from reactive to proactive response and opens doors for further research on AI-assisted SON. In this talk, we will discuss some aspects of SON solutions for 5G.
Dr. Muhammad Zeeshan Asghar is a post-doctoral researcher at the Faculty of Information Technology, University of Jyväskylä, Finland. He received his master's degree in mobile systems and his doctoral degree from the Department of Mathematical Information Technology at the University of Jyväskylä in 2010 and 2016, respectively. He has contributed results to several international conferences and journals since 2010 in the field of telecommunications, as part of his doctoral studies at the University of Jyväskylä, Finland. Currently, he is leading a half-million-euro project on 5G Cognitive Self-organizing Networks at the University of Jyväskylä. He has one granted patent (US Patent No. 9326169) and another patent application filed. He has a global collaborative research network spanning both academia and key industry players in the field of wireless communications. He participated in several research projects at Nokia Solutions and Networks, NSN Finland (former Nokia Siemens Networks), from 2009 till 2014. He conducted three months of intensive research on 5G SON at the 5G Innovation Centre (5GIC), University of Surrey, UK, in 2016. His research interests primarily focus on self-organizing networks, 5G technology, heterogeneous small-cell networks, massive MIMO, artificial neural networks, cognitive computing, software-defined networking, and the Internet of Things (IoT).
Based on parameters obtained via big data mining, mathematical modeling, and agent-based modeling and simulation, we predict the whole four-stage process of online collective action: (a) the Prepare stage. We predict the total amount based on machine learning and indicator building. Factors such as societal strength, the Gini index, living standard, urbanization rate, and industry structure are utilized to measure and estimate the input energy of collective actions; (b) the Outbreak stage. The outbreak is visible but too fast to be observed, so its prediction has to be based on why the outbreak happens. The outbreak mechanism is inspected based on the threshold model, and the outbreak timing is then predicted from this mechanism; (c) the Peak stage. The peak phase determines the energy and power of collective actions against their opponents, and it needs to be calculated and predicted if we intend to back-calculate the whole process of collective actions. We explore the determining factors of peaks and solve for the peak based on the function; the stability of peaks under big data is also checked; and (d) the Vanish stage. The vanish period can be predicted based on vanishing factors, i.e., attention shift, marginal decrease, strong heterogeneity, context shift, and feature shift.
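The outbreak mechanism referred to above can be illustrated with a Granovetter-style threshold model: each agent joins once the fraction of active agents exceeds its personal threshold, and the outbreak is the cascade that follows. The threshold distribution below is illustrative, not a fitted parameter set from the study.

```python
def run_threshold_model(thresholds, steps=20):
    """Iterate participation until a fixed point; return the active fraction over time."""
    n = len(thresholds)
    active = [t == 0.0 for t in thresholds]     # zero-threshold instigators start
    history = [sum(active) / n]
    for _ in range(steps):
        frac = sum(active) / n
        active = [a or (t <= frac) for a, t in zip(active, thresholds)]
        history.append(sum(active) / n)
        if history[-1] == history[-2]:          # fixed point reached
            break
    return history

# A near-uniform threshold distribution cascades to full participation.
thresholds = [i / 10 for i in range(10)]        # 0.0, 0.1, ..., 0.9
history = run_threshold_model(thresholds)
```

Removing a single low threshold from such a distribution can halt the cascade entirely, which is why the outbreak timing is sensitive to the threshold structure rather than to the total "input energy" alone.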
Peng Lu completed his PhD and postdoctoral studies at Tsinghua University, China. He is now a professor of sociology. He has published more than 25 papers in reputed journals and has been serving as an editorial board member of reputable academic journals.
As robot systems become more complex, the design and development of hardware becomes more difficult; from design patterns to testing methodologies, the development of specialized hardware to perform high-performance tasks is becoming ever more important. Robotic systems such as quadcopters have been very popular in recent years, not only in terms of applications, sensors, and control algorithms, but also in hardware development for tasks such as digital signal processing, sensor fusion, and flight path planning. For this reason, a methodology and platform for the design, development, and testing of hardware systems for quadcopters has become desirable. Accordingly, the HiLeS hardware design methodology is proposed, covering the vertical integration of sensor systems, dynamic models, and advanced control algorithms in order to validate not only individual hardware designs but the complete integration of the quadcopter's electronic system. This approach allows not only the design and modeling of advanced fusion methods such as higher-order Kalman filters, but also of stability control algorithms such as optimal controllers and their comparison with conventional controllers, while focusing on hardware issues such as delay times, robustness of the design, and integration with the rest of the quadcopter's electrical system.
Fernando Jimenez was born in Bogotá, Colombia, in 1958. Doctor Jimenez is an Electrical Engineer (Uniandes, 1979), holds a Diplôme d'Études Approfondies in Automatic Control (SupAéro, 1983), and is a Doctor in Systems (INSA, 2000 and Uniandes, 2001). He has been an Associate Professor at Uniandes since 1986.
As Veruggio and Operto maintain, the design of Roboethics requires the combined commitment of experts from several disciplines who, working in transnational projects, committees, and commissions, have to adjust laws and regulations to the problems emerging from scientific and technological achievements in robotics. Roboethics requires the involvement of disciplines such as Robotics, Computer Science, AI, Philosophy, Sociology, Ethics, Theology, Biology, Physiology, Cognitive Sciences, Neuroscience, Psychology, and Industrial Design. The development of Robotics and Roboethics favors the birth of new curricula studiorum and specialties, necessary to manage so complex a subject (Forensic Medicine can be considered a good example of this new trend). We consider the case of robotic-assisted surgery (RAS) and the advantages and disadvantages of adopting it, referring to the latest developments in the international context. The scientific community now considers robotic surgery a clinical reality, well accepted by both surgeons and patients, although there are no definitive data from experimental trials. Recent data suggest that the confidence developed by surgeons with robotic technology and the increasing interest of patients are driving health administrations to overcome the impact of the high initial costs and to invest in this advancing technology.
In Jamaica, athletics is established and basketball is a growing phenomenon, as it requires little space and can be used as a tool for social intervention in communities. There is a drive to boost the opportunities for science, technology, engineering and mathematics (STEM) education in the country. One creative thought is to link robotics and basketball as a multifaceted tool to accomplish both objectives. The idea is to engage students and graduate students in building robots which can shoot baskets and then possibly compete against other robot teams. Our plan is to start with a single-robot system within a defined space, shooting from set predefined coordinates. Having established data from these base experiments, we will then increase the complexity so that robots are placed on teams (three per team) and are able to compete. The robots are autonomous and limited to mobile robots with shooting and maneuvering mechanisms. The exercise in sensor application (such as laser technology), algorithm development, programming, interpretation, and final actuator design will challenge the design and creative ability of everyone associated with the team. This work is related to previous activity where UTech teams entered international robotic competitions (IEEE SoutheastCon 2012). Our goal is to transfer knowledge from this research exercise and tackle other real-world engineering problems in manufacturing. The research efforts will also develop high-school students and youths by providing a stimulating environment to harness their abilities.
Resource sharing is a problem found in many fields. In transportation systems, drivers meet such problems at intersections and when merging and changing lanes. Despite traffic regulations and signalization, humans still communicate, explicitly or implicitly through their driving style, to solve conflicts and avoid deadlocks or persistent traffic jams. Drivers yield the right of way, tailgate, tighten the queue, and even move back to recover smoother traffic. Unfortunately, the speed of vehicles coupled with limited perception reduces the scope of cooperation. The progress of vehicles towards more automation and communication offers the opportunity to provide a rational sharing of the infrastructure. Local conflicts, especially at intersections, have received particular attention this last decade. Traditional optimization approaches and control theories provide good results. However, the problem is hard to solve without approximations, mainly because of the dynamic nature of the traffic flow merging at intersections and the hybrid nature of the problem, i.e., discrete when choosing which vehicle goes first and continuous when controlling the acceleration. In order to extend the solutions, we proposed distributed algorithms inspired by human behaviors that allow the emergence of a kind of cooperative intelligence. The substantial gains obtained at intersections and on multi-lane roads encourage further studies of the approach.
Abbas-Turki Abdeljalil is an Associate Professor at UTBM. He has managed several projects related to transportation and mobility, such as train timetabling, the design of transportation services for a TGV station (roads and public transportation including trains), gate layout at subway stations, and more recently a real implementation of cooperative intersections for autonomous vehicles. He has published more than 50 papers dealing with space sharing in transportation systems.
Data compression encodes data with a smaller size than the original, in order to reduce the cost of storage or transmission. It is an important problem in different areas, such as medical imaging, sound processing, speech storage, and many others. A seminal contribution is the work of Shannon, which laid the basis of Information Theory (IT) and gave birth to the large majority of compression techniques proposed since then. Compression can be lossless, such as the well-known PNG, or lossy, such as MP3 or JPEG. One fundamental concept in IT is that of entropy, which can be regarded as the minimum number of bits needed to encode the original data with a lossless compression; in this case, the compression rate equals the entropy rate. Lossless compression reaches its limit, however, when the data has a uniform distribution; in fact, there are even Web challenges regarding this. In this work, we face this problem. The rationale of our technique can be explained with a simple example. Without loss of generality, we can write any data in matrix format. Consider a 2 × 2 matrix. If this matrix is orthonormal, it can be written as a rotation matrix in terms of a single angle, i.e., a matrix originally with four elements can be represented with a single one! We explore this idea in our technique. Our approach is fundamentally based on two tools: (a) singular value decomposition (SVD), a technique to write a given matrix as a product of three others, two of which are orthonormal; and (b) a proposed form for writing orthonormal matrices in terms of a single angle. We take any matrix and first decompose it using SVD. Since two of the resulting matrices are orthonormal, we rewrite them as functions of angles. In terms of geometry, SVD generates any matrix from an orthonormal one as follows: it first rotates a given orthogonal matrix, then stretches and rotates again, thus generating the desired matrix.
We tested the proposed method in different experiments to compress uniformly distributed data and obtained compression ratios of up to 5.11 ± 0.75.
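The key observation above can be sketched directly: a 2 × 2 rotation (orthonormal) matrix is fully determined by one angle, so four stored numbers reduce to one. This illustrates only the angle-encoding step; the full method also needs the SVD and its singular values, which we omit here. The function names are ours.

```python
import math

def angle_from_rotation(m):
    """Recover theta from [[cos t, -sin t], [sin t, cos t]]."""
    return math.atan2(m[1][0], m[0][0])

def rotation_from_angle(theta):
    """Rebuild the 2x2 rotation matrix from the stored angle."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

theta = 0.7
m = rotation_from_angle(theta)        # four matrix entries...
recovered = angle_from_rotation(m)    # ...compressed to a single angle
```

Because the round trip is lossless up to floating-point precision, the orthonormal factors of the SVD carry no information beyond their angles, which is what the 4-to-1 reduction exploits.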
Allan Kardec Barros obtained his PhD in biomedical signal processing in 1998 from Nagoya University, Nagoya, Japan. He was a Frontier Researcher at the RIKEN Biomimetic Control Research Center. He is currently a Full Professor at the Federal University of Maranhão, Brazil, and a collaborator at the Brain Science Institute, RIKEN, Japan. He has written more than 100 publications on biomedical signal processing and related applications. He was a co-editor-in-chief of the International Journal on Computational Intelligence and Applications and is regularly asked to review papers for reputed journals, as well as for a number of conferences. He was a member of the editorial board of the Signal Processing journal. He has won awards in recognition of his contributions to the development of science in Brazil. He has published more than 80 papers in reputed journals.
The rapid growth in information and communications technology (ICT) has led different industries and organizations to progressively employ these infrastructure elements, applications, and off-the-shelf technologies to enhance the quality of existing services and to reduce their costs. Advances in networked or Internet-connected devices, data analysis, and artificial intelligence can effectively facilitate the management of assets, communications, etc., which results in reduced costs. In this era of digitization, the Internet of Things (IoT) opens new opportunities in different industries, expanding into a vast but vulnerable interconnected network of people and objects. Although it offers convenience and efficiency to users so that they can achieve a better quality of life, it poses new security and privacy challenges in terms of the confidentiality, authenticity, and integrity of the data sensed, collected, and exchanged by IoT objects. The threats and cyber-attacks in IoT devices and services can be related to the high degree of connectivity present, cloud computing, and cyber-physical and social systems, and it is necessary to identify, assess, and resolve the possible risks. Today's cyber-physical systems span IoT and cloud-based data center infrastructures, which are highly heterogeneous, with various types of uncertainty. In this study, I will show the cyber security issues in IoT systems, especially in smart industries such as the automobile and food industries, and present a complete picture of their security status. In this talk, I first categorize different threats in the IoT context, based on a comprehensive study of related works; this helps us to understand the threats and attacks on the IoT system and infrastructure. Then I highlight the major aspects of cyber security issues in IoT systems and possible solutions for them.
Dr. Marzieh Jalal Abadi is a researcher and consultant in the fields of cyber threat intelligence, security analysis in IoT, cybersecurity risk assessment, and human behavior analysis using machine learning algorithms. She has experience in both academia and industry and provides customized cybersecurity solutions for development and implementation in complex environments. Marzieh was a research fellow and project manager at the Canadian Institute for Cybersecurity (CIC) and worked on different projects for financial institutions (TD Bank), smart industries (McCain Foods), and R&D companies (IBM Canada). Prior to joining CIC, she worked as a researcher in the Cyber Physical Systems Research Group at Data61|CSIRO in Sydney, Australia, while she was a lecturer at university. Marzieh received her PhD in Computer Science and Engineering from UNSW in Sydney, Australia. During her PhD, she was a research assistant in Data61|CSIRO and worked on modeling various sensory data from smartphones, vibration energy harvesting, and text data from different online sources. Her research is based on empirical data sets, including but not limited to time series, imbalanced data, and heterogeneous data. Prior to her PhD, she completed her Master's degree in Mobile and Personal Communication at King's College, University of London, and her B.Eng. degree in Electronic Engineering at Shahid Beheshti University in Tehran, Iran. She has a broad set of interests in cybersecurity and privacy in IoT, threat modelling, machine learning algorithms, fake news analysis, and parallel and distributed iterative algorithms for data fusion.
Given a robot team that is equipped with multiple sensors and able to communicate via a wireless channel, we consider the problem of cooperatively sensing and acting in environments with the goal of maximizing a shared utility. Compared with single-agent reinforcement learning (RL), multi-agent RL encounters many more difficulties due to environment nonstationarity and partial observations. In these environments, agents should leverage data captured by various sensors and learn communication protocols to share information for solving cooperation tasks. By embracing deep neural networks, we propose to build an end-to-end model that can fuse incoming multi-modal data and learn message generation protocols. This model leverages the attention mechanism that interactively focuses on the message content, the local visual captures, and radar information, and utilizes the correlation of all modalities to better learn a cooperative control policy under the reinforcement scheme. Furthermore, with the communication mechanism, an agent can better infer its next state by assigning different weights to a single sequence guided by the received messages, which eventually demonstrates that fusing multi-modal data and learning communication protocols facilitate multi-agent cooperation problems.
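The attention-based fusion described above can be sketched minimally: each modality (vision, radar, received messages) gets a softmax weight from a scalar relevance score, and the fused feature is the weighted sum. In the paper the scores come from learned networks; here the scores and feature values are illustrative placeholders, not the authors' model.

```python
import math

def softmax(scores):
    mx = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(features, scores):
    """Attention-weighted sum of same-length per-modality feature vectors."""
    w = softmax(scores)
    dim = len(features[0])
    return [sum(w[i] * features[i][d] for i in range(len(features)))
            for d in range(dim)]

vision  = [1.0, 0.0]
radar   = [0.0, 1.0]
message = [0.5, 0.5]
fused = fuse([vision, radar, message], scores=[2.0, 1.0, 0.5])
```

With a higher relevance score, the vision modality dominates the fused vector; as the scores shift (e.g., when a teammate's message becomes more informative than the local camera), the same mechanism reweights the modalities without any architectural change.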
The objective of this presentation is to describe the tool developed by SEAD (Secretariat of State for Digital Advancement) within the framework of the Human Language Technologies Plan (HLT Plan) launched by SEAD. The tool aims to map the Artificial Intelligence (AI) landscape in Spain compared to Europe and the USA. It implements NLP (Natural Language Processing) and text analytics techniques to process extensive datasets of patents, publications, projects, etc. The output of the tool offers a number of functionalities for different types of stakeholders; AI in this presentation is therefore both an end and a means. The presentation consists of four parts. First, an introduction to the HLT Plan describing the initiatives it has taken; the introduction includes the motivation behind the work and outlines the objective of the tool, the beneficiaries, the datasets used, etc. Second, the technical development of the tool is explained in detail. Third, the different functionalities offered are described with real case examples and results. Finally, conclusions and future work outline the main findings and the next steps.
David Pérez Fernández. Adviser to the Secretariat of State for Digital Advancement, Spain. Physicist and mathematician, PhD in Differential Geometry. IT expert with a patent on dynamic routing and long experience in SCADA, Web, multimedia, GIS, stock exchange data analysis, tributary data mining and analysis, etc. Main areas of interest: Natural Language Processing, ICT, and mathematics. Designer and coordinator of the Spanish Plan to foster Language Technologies, leading projects such as the SESIAD Information and Communications Technology (ICT) monitoring system.
Biometrics refers to the automatic identification (or verification) of an individual (or a claimed identity) using certain physiological or behavioural traits associated with the person. In the past few decades, biometrics has drawn significant attention for its potential in various applications. Many studies have been and are being conducted to improve the overall accuracy and robustness of the techniques used for feature extraction, data acquisition, and image pre-processing. The performance of unimodal biometric systems is affected by several problems, such as background noise, signal noise and distortions, and environment or device variations. A multimodal biometric system can overcome the drawbacks of a single biometric approach. A study is carried out to propose multimodal biometric fusion using the modalities of face, iris, and conjunctival vasculature in bimodal and multimodal fusion scenarios. The features from each modality are first extracted using a local ternary pattern-based texture descriptor, viz., the steady illumination colour local ternary pattern, and then the features are fused at the feature level. The fusion is implemented by concatenating the extracted features from these modalities, and similarity matching is conducted using the zero-mean sum of squared differences. Furthermore, the homogeneity of the features is maintained by applying the same feature extractor to all the modalities. The feature extractor is sufficiently efficient in extracting all important features to be fused, which leads to better accuracy. Moreover, the increased number of feature sets resulting from fusion at the feature level is reduced using a genetic algorithm, which further improves the accuracy. Experimental results show the effectiveness of the proposed method and reveal that multimodal biometric verification is considerably more reliable and precise than the single biometric approach.
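The feature-level fusion and matching steps described above can be sketched as concatenation followed by a zero-mean sum of squared differences (ZSSD). The feature values below are synthetic placeholders; in the real system they come from the local-ternary-pattern texture descriptor.

```python
def fuse_features(*modalities):
    """Feature-level fusion: concatenate per-modality feature vectors."""
    fused = []
    for features in modalities:      # e.g., face, iris, conjunctival vessels
        fused.extend(features)
    return fused

def zssd(a, b):
    """Zero-mean SSD: subtract each vector's mean, then sum squared diffs."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum(((x - ma) - (y - mb)) ** 2 for x, y in zip(a, b))

probe   = fuse_features([0.2, 0.8], [0.5, 0.1], [0.9, 0.4])
gallery = fuse_features([0.2, 0.8], [0.5, 0.1], [0.9, 0.4])
score = zssd(probe, gallery)         # identical templates give a zero distance
```

Subtracting each vector's mean makes the match score invariant to a constant intensity offset between probe and gallery, which is why ZSSD is preferred over plain SSD under varying illumination.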
Noorjahan Khatoon completed her PhD in Computer Science and Engineering from Sikkim Manipal University. She is currently working as a Senior Lecturer at the Advanced Technical Training Centre, Bardand, East Sikkim.
As robotics has a huge impact on today's technology, this programming language is dedicated to robotics. It serves users of robotics so that they can program robots in an easy way, because all commands needed for the robots' functions are available. JanusBot is a high-level language close to English: you can write a simple command to move the robot. As a result, the user faces few restrictions when coding in JanusBot. It is specified for all kinds of robots, such as mobile robots, stationary robots, and drones. All commands needed to control drone movement, such as take-off and landing, are available in JanusBot. It also provides communication between the server and the robots, and among the robots. Programmers do not need to declare variable types, which makes learning to program with JanusBot easier. It supports different types of modules: machine modules, where all the machine specifications are defined; program modules, which form the main program; procedure modules, which are void functions that do not return a value; and function modules, which return a value. It also supports different kinds of loops, conditional flow, and expression evaluation. The most interesting feature of JanusBot is its virtual machine: the code is compiled into an instruction set, and the virtual machine interprets it for execution. It can therefore be executed and run on different robot platforms, so the programmer need not worry about which kinds of robots can be programmed with JanusBot: one script written in JanusBot can run on different kinds of robots.
Aseel is a computer engineering student at AUM. Hyaat is an electrical engineering student at AUM. Monya is a computer engineering student at AUM. Mariam is an electrical engineering student at AUM. Rawan is an electrical engineering student at AUM. Yehia completed his PhD at the University of Western Ontario, Canada. He is an assistant professor in the computer engineering department at the American University of the Middle East. He has published more than 20 papers on many different aspects of computer engineering.
We describe an Artificial Neural Network in which words are mapped to individual neurons instead of being variables fed into a network. The process of changing training cases is equivalent to a Dropout procedure in which we replace some (or all) of the words/neurons in the previous training case with new ones. Each neuron/word then takes as input all the “b” weights of the other neurons and weights them all with its personal “a” weight. To learn, the network uses the backpropagation algorithm after calculating an error from the output of an output neuron, which is a traditional neuron. This network thus has a unique topology and functions with no inputs. We use coordinate gradient descent to learn, alternating between training the “a” weights and the “b” weights of the words. The Idealistic Neural Network is an extremely shallow network that can represent non-linear complexity in a linear outfit.
Today, several research efforts use multimodal sensing devices, communication technology, and smartphones to detect human activity, health conditions, and mental states. Sensing in these studies can be wearable, external, or software/social-media based. Alternatively, we can classify sensor data into three sensing types: physiological, mental, and environmental. The biomedical platforms developed in this regard can be classified into three categories: (1) monolithic platform-based, (2) textile (fabric)-based, and (3) body-sensor-network-based. The wireless body sensor network (WBSN) is a common approach for health monitoring systems. In this approach, data are sensed and collected by sensors and transmitted wirelessly to a base station (such as a smartphone, an actigraphy device, or a smartwatch) for long-term storage and processing. Typical sensors found in a smartphone are accelerometers, gyroscopes, ambient light sensors, proximity sensors, GPS, Bluetooth, a microphone, a video camera, a magnetometer, etc. With the ever-growing popularity and capabilities of smartphones, several research works have started to use them as a platform for data collection studies. Although sensor data do not capture mental state directly, it can be derived from the sensed behavior emerging from physiological data. For example, circadian rhythm disturbances have been shown in studies of activation in bipolar disorder; skin conductivity and heart rate are factors used to assess the nervous system; and sleep-time data from wearable sensors are used for the early detection of migraine attacks. Conclusion & Significance: in this presentation, we discuss WBSN technologies and the use of machine learning model training to extract knowledge from raw data.
N. Khozouie completed her PhD in Information Technology Engineering from Qom University, Iran. Her research focuses on ontological data modeling for ubiquitous healthcare monitoring systems. She is an assistant professor in computer science and electrical engineering at Yasouj University, Yasouj, Iran. She currently works on a pregnancy healthcare monitoring system. Her main research interests include data-driven and intelligent approaches for pervasive healthcare monitoring systems, machine learning model training, WBSN, and the semantic web. She has researched the semantic web, ontology development, and ontology evaluation since 2009. Her research findings have been presented in IEEE conferences and journal articles.
Detecting change is part of human nature. Humans build up virtual images of their surroundings over time and notice any change instantly, all thanks to their brain. While all of this functions very well in the human brain, there are substantial benefits to an efficient change detection mechanism created virtually. To replicate the structural and algorithmic properties of the human brain, this paper presents a system based on Hierarchical Temporal Memory (HTM) to detect change in a series of images. Moreover, it integrates the change-detecting HTM machine into a robotic security guard responsible for monitoring and guarding valuable items. The system was designed to serve a security surveillance application, processing and analysing several image inputs during non-working hours and identifying changes, e.g., related to unauthorised access. The input to the system is a series of images that were initially processed with MATLAB in order to resize them from a 256 × 256 matrix to a smaller size and then convert them into a single-line vector in CSV format to suit HTM's input requirements. The various tests performed show that HTM detects spatial as well as temporal anomalies when they occur once or twice; however, if a pattern occurs more regularly, it gets learned and becomes familiar, meaning it is no longer detected as an anomaly.
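The preprocessing pipeline described above (resize, then flatten into one CSV row) can be sketched as block averaging followed by flattening. The original uses MATLAB on 256 × 256 images; this Python sketch uses a 4 × 4 toy image for illustration, and the function names are ours.

```python
def downsample(img, factor):
    """Shrink an image by averaging non-overlapping factor x factor blocks."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

def to_csv_row(img):
    """Flatten the 2D image into the single-line CSV vector HTM expects."""
    return ",".join(str(v) for row in img for v in row)

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [4, 4, 12, 12],
       [4, 4, 12, 12]]
small = downsample(img, 2)
csv_row = to_csv_row(small)
```

Shrinking a 256 × 256 frame this way reduces the input dimensionality dramatically while preserving coarse spatial structure, which keeps the HTM's input space tractable.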
Dr Wim JC Melis currently works at the University of Greenwich. Previously, he worked as a patent examiner at the European Patent Office and as a CEO Fellow at Tohoku University, Japan. He obtained his PhD and second MSc at Imperial College London and his first MSc in Belgium. His research interests lie in optimising the overall efficiency of a system at the system level by developing cross-disciplinary solutions that go back and question basic principles. He applies this holistic approach to the areas of computer hardware design for AI and sustainable energy generation.
Path tracking is an essential task for autonomous robots. Most path tracking is implemented with infra-red sensors, which cannot be applied to wider roads; visual sensors such as CCD cameras are used in those cases. However, a vision-based path-tracking system suffers from inconsistent luminance, which introduces unstable inputs and noise, so the control system needs its control parameters tuned iteratively. This paper presents a PID control system guided by Recurrent Neural Networks (RNNs) and a Reinforcement Learning (RL) strategy for autonomous wheeled robots. Traditional PID control is effective for steady states; however, the paths explored by autonomous robots may be variable and unsteady. The input to the system is an image stream of tracks that is converted to grey-scale edges by a dual filter. The degree of path curvature is calculated as the input to the PID controller, and the PID parameters are learned with an RNN. The LSTM (Long Short-Term Memory) mechanism in the RNN incorporates hidden and gated cells that can memorise changes of state in a time sequence without losing the time-state information, so PID parameters can be learned even when the states change over time. Additionally, RL is employed in the RNN learning process for automated error correction. In this study, a three-wheeled robot is implemented with RNN and RL for visual path tracking. The experimental results show that tracking stability increases by about 20% and navigation speed by about 40%.
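The core idea of the abstract is a PID controller whose gains are not fixed but supplied each step by a learned model. A minimal sketch of that structure is below; the `gains_from_lstm` function is a hypothetical stand-in for the paper's trained LSTM (here replaced by a fixed rule), and the toy plant update is purely illustrative.

```python
class PID:
    """Incremental PID whose gains can be updated every step (e.g. by an LSTM)."""
    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, gains: tuple, dt: float = 0.05) -> float:
        kp, ki, kd = gains
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

def gains_from_lstm(curvature_history):
    """Placeholder for the RNN: the paper's LSTM maps a window of path-curvature
    inputs to PID gains; here a fixed rule fakes that mapping."""
    c = abs(curvature_history[-1])
    return (1.0 + c, 0.1, 0.05)  # hypothetical gain schedule

pid = PID()
history = [0.0]
for target_curvature in [0.0, 0.2, 0.4, 0.1]:
    error = target_curvature - history[-1]
    u = pid.step(error, gains_from_lstm(history))
    history.append(history[-1] + 0.5 * u)  # toy plant response
```

In the actual system, the LSTM would be trained (with RL-based error correction) so that the per-step gains stabilise tracking as the path curvature changes.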
Chih-Hung Wu received the M.S. and Ph.D. degrees in Electronic Engineering from National Sun Yat-sen University, Taiwan, in 1992 and 1996, respectively. He is currently a Professor with the Department of Electrical Engineering and the CEO of the Artificial Intelligence Research Center, National University of Kaohsiung, Taiwan. Dr. Wu is the director of ICAL (Intelligent Computation and Applications Lab.), National University of Kaohsiung, Taiwan. His research interests include AI, soft computing, robotics, GPS, and cloud computing. He is a member of the Taiwanese Association for Artificial Intelligence, the Taiwan Fuzzy Systems Association, and the Taiwan Automation Intelligence and Robotics Association. He is also a member of the IEEE (CI, I&M, and SMC Societies).
Currently, robots mostly work locally, with functions being executed on the robot hardware. But mono-functional, statically programmed robots are becoming increasingly inadequate: robots are evolving into multi-functional, self-learning, universal, cooperative machines, and the robot brain is moving to the cloud. Intelligent robotics can create a USP for telecommunication companies by leveraging their assets, solving customers' challenges, and enabling new services and business models. Complex robotics drives growing requirements for connectivity, computing, and security. The complexity of the tasks carried out by robots will grow rapidly over the coming decades, and the requirements for low latency will constantly increase as more robotic functions are executed in the cloud. In tomorrow's fast-changing environment, robots must be able to process information quickly and adapt to new circumstances. In the last two years, robot functions have been shifted to the cloud; here, telecommunication companies can position themselves as early-mover robotics cloud providers with low-latency capabilities. In a next step, robots will require more flexibility regarding their capabilities. Having established a robotics platform at an early stage, telecommunication companies can consolidate their position with a Robotics-as-a-Service offering to address the B2B and B2B2C markets. As integrated network (5G) and IoT platform providers, telecommunication companies can take a central control point in the intelligent cloud robotics business. To establish themselves in cloud robotics, telecommunication companies should target multiple stages of the value chain; revenue could be generated through a Platform-as-a-Service approach and strategic partnerships.
Julius Golovatchev is Head of the PLM Competence Team at Detecon International GmbH (Deutsche Telekom Group) in Cologne, Germany. He holds a Diploma in Mathematics and a PhD in Economics. Julius has over 20 years of experience in the telecommunication industry, specialising in innovation management as well as product lifecycle management. He is the author of numerous publications on innovation management and product development and often speaks at international conferences. Julius is a foreign member of the Russian Academy of Natural Sciences (Branch of Information and Telecommunication Technologies).
In recent years, there has been massive progress in artificial intelligence (AI) with the development of deep neural networks, natural language processing, computer vision, and robotics. These techniques are now actively being applied in healthcare, with many of the health-service activities currently delivered by clinicians and administrators predicted to be taken over by AI in the coming years. Today, robots perform vital functions in homes, outer space, hospitals, and military installations. The development of robotic surgery has given hospitals and healthcare providers a valuable tool that is making a profound impact on highly technical surgical procedures. Innovative robotic surgical applications and techniques are being developed and reported every day, and increased utilisation and development will ultimately fuel the discovery of new applications of robotic systems in every kind of surgery. In my research, I have applied deep learning to classify and predict the condition of patients suffering from prolonged illness. The system has been developed to assist nurses and doctors rather than to dictate to them, because human decisions take more aspects into account than an AI system (until now!). In my talk, I will provide an overview of the history, development, and applications of robotics in healthcare.
I am working as a Senior Researcher at the University of Agder, Norway. In my six-year research career, I have published 20 papers on interdisciplinary applications of machine learning. I am currently developing an eCoach application for athletes and coaches: it helps athletes with food intake and physical activity, while giving coaches an overview of their athletes. My research interests lie in CNNs, RNNs (neural networks in general), and computer vision.
Higher-order neural networks constitute an interesting approach to increasing the discrimination and classification capacities of feed-forward, supervised-learning neural network models, avoiding the classical problems caused by a very large number of inputs or a very large number of synapses (due to the complexity of the problem). This is achieved by increasing the computational capacity of the neuron activation function or of the mathematical definition of the synapse. In this proposal a tool is presented, structured as two function libraries, each related to one of the two higher-order neural models described. The first is the GSMLP (Gaussian Synapse Multilayer Perceptron), a higher-order ANN in which the computational capabilities of the network are increased (relative to the standard MLP) by using non-constant synapse weights. The second library constitutes the engineered implementation of the Vectorial Input Higher Order Network (VIHON), a higher-order neural network designed to consider the values of all the colour channels of every pixel simultaneously. This allows both colour information and object-shape information to be taken into account, a key factor in complex visual recognition tasks (e.g. mobile robots in unstructured scenarios). Both neural paradigms were defined and designed by the first author of this proposal. A detailed explanation of both paradigms will be given, together with the definitions of the different functions, and results will be presented showing both how easy it is to build neural-based solutions and how good the results are.
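The GSMLP's defining idea is a synapse whose contribution is a Gaussian function of its input rather than a constant weight. A minimal NumPy sketch of one forward pass under that idea follows; the parameter names (`amp`, `mu`, `sigma`), the sigmoid activation, and the layer shapes are illustrative assumptions, not the authors' library API.

```python
import numpy as np

def gaussian_synapse(x, amp, mu, sigma):
    """A synapse whose weight varies with its input as a Gaussian:
    output = amp * exp(-(x - mu)^2 / (2 * sigma^2))."""
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def gsmlp_layer(x, amp, mu, sigma):
    """One GSMLP-style layer: sum the Gaussian-synapse responses feeding each
    neuron, then apply a sigmoid. x: (n_in,); parameters: (n_out, n_in)."""
    z = gaussian_synapse(x[None, :], amp, mu, sigma).sum(axis=1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(4)                     # 4 inputs
amp = rng.normal(size=(3, 4))         # one Gaussian per synapse, 3 neurons
mu = rng.normal(size=(3, 4))
sigma = np.ones((3, 4))
y = gsmlp_layer(x, amp, mu, sigma)
print(y.shape)  # (3,)
```

Training would then adjust `amp`, `mu`, and `sigma` per synapse instead of a single scalar weight, which is where the extra discriminative capacity comes from.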
Juan Luis Crespo-Mariño received the B.Sc. and M.Sc. degrees in Physics from the University of Santiago de Compostela (Spain) in 1996 and 1999, and the Ph.D. degree in Systems and Automation Engineering from the University of A Coruña (Spain) in 2007. He is currently a Professor at Tecnológico de Costa Rica (Mechatronics Engineering Department), where he heads the LIANA Lab (Artificial Intelligence for Natural Sciences Lab). He has participated in more than 60 research projects funded through international calls. His current research interests include biomedical active devices, higher-order neural paradigms, signal and image processing, and autonomous robotics, among others.
Einstein Assistant is an AI voice assistant for enterprises that enables users to "Talk to Salesforce". Users can dictate memos, update Salesforce records, and create tasks using natural language. Einstein Assistant pioneers the use of voice and Natural Language Processing (NLP) to enhance the user experience by reducing manual entry and increasing the timeliness and volume of data capture. In this talk, we will go through the high-level architecture and workflow, starting from Automatic Speech Recognition (ASR) through to using NLP to identify entities and intents in single-turn conversation text. Come to learn our practical approach to implementing a voice assistant and the unique challenges involved in integrating with enterprise data. We will also discuss opportunities to improve on our approach; for example, we are looking at adopting a more general multi-task learning NLP model (see decaNLP.com) instead of a single-task model to enhance NLP performance.
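To make the intent/entity step concrete, here is a toy illustration of what that stage of the pipeline produces. The keyword table, regex, and labels are entirely hypothetical; the production system uses trained NLP models, not rules like these.

```python
import re

# Hypothetical keyword-to-intent table standing in for a trained intent classifier.
INTENTS = {
    "create task": "CreateTask",
    "update": "UpdateRecord",
    "memo": "DictateMemo",
}

def parse_utterance(text: str):
    """Toy intent/entity extraction from ASR output: keyword match for the
    intent, a regex for a contact-name entity."""
    intent = next(
        (label for kw, label in INTENTS.items() if kw in text.lower()),
        "Unknown",
    )
    m = re.search(r"with ([A-Z][a-z]+)", text)
    entities = {"contact": m.group(1)} if m else {}
    return intent, entities

print(parse_utterance("Create task to follow up with Alice"))
# ('CreateTask', {'contact': 'Alice'})
```

The real workflow replaces both rules with models but keeps the same contract: transcribed text in, an intent label plus a set of entities out, which are then resolved against Salesforce records.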
Vishal is a Member of Technical Staff at Salesforce, where he works on Einstein Voice Assistant, bringing best-in-class NLP capabilities from Salesforce AI Research to production. Prior to Salesforce, he contributed to research evaluating the impact of lawyers' voices on US Supreme Court case outcomes. He has also worked on predicting the optimal distribution of bikes in New York City. He holds a Master's in Computer Science from New York University's Courant Institute of Mathematical Sciences, where he studied Natural Language Processing, Computer Vision, and more.