For service robots, it is important that autonomous decisions can be made onboard in real time, based on sensory information, in order to interact with human users and the environment. Building embedded intelligence into service robots enables this capability. This talk will present results from our research on embedded intelligence for service robots, focusing on mobile assistant robots that help elderly people. The developed robot has three legs but can be reconfigured to two legs as required, providing companionship and physically assisting its users' locomotion. Algorithms have been developed to keep the robot balanced in different situations. Human-robot interaction capabilities, such as recognizing a user's movement intent and real-time human detection and tracking, have also been implemented onboard. AI and deep learning algorithms are usually run on high-end computers with expensive GPUs; however, with recent developments in hardware, some state-of-the-art deep learning algorithms have been implemented on the embedded boards of our service robot. Real-time tracking performance has been achieved by combining features extracted by a lightweight CNN with a Discriminative Correlation Filter (DCF), implemented on a Raspberry Pi 3 with a Movidius accelerator and achieving 7 fps in human detection.
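The correlation-filter component mentioned above admits a compact closed-form sketch. The following minimal 1-D illustration shows the DCF idea (a MOSSE-style closed form on an invented toy feature vector, not the robot's actual CNN-based pipeline): the filter is trained in the frequency domain, and its response peak follows the target when it shifts in the next frame.

```python
import cmath
import math

def dft(x):
    # naive discrete Fourier transform (fine for a toy signal)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def train_dcf(x, y, lam=0.01):
    # Closed-form correlation filter in the frequency domain (MOSSE-style):
    #   H = (Y . conj(X)) / (X . conj(X) + lambda)
    X, Y = dft(x), dft(y)
    return [Y[k] * X[k].conjugate() / (X[k] * X[k].conjugate() + lam)
            for k in range(len(x))]

def respond(H, z):
    # correlate a new frame's features with the trained filter
    Z = dft(z)
    return idft([H[k] * Z[k] for k in range(len(z))])

# Toy 1-D "feature channel": the target sits at index 3 in the training frame,
# and the desired response is a Gaussian peak at that position.
x = [0.0] * 8; x[3] = 1.0
y = [math.exp(-((n - 3) ** 2) / 2.0) for n in range(8)]
H = train_dcf(x, y)

# In the next frame the target has shifted to index 5; the response peak follows.
z = [0.0] * 8; z[5] = 1.0
r = respond(H, z)
peak = max(range(len(r)), key=lambda i: r[i])
print(peak)  # 5
```

In the real system the single toy channel would be replaced by the CNN feature maps, and the DFTs by an FFT, but the closed-form training step is the same.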
Dr Qinggang Meng is a Reader in Robotics and Autonomous Systems in the Department of Computer Science, Loughborough University, UK. His main research interests include robotics, unmanned aerial vehicles, driverless vehicles, networked systems, ambient assisted living, computer vision, AI and pattern recognition, and machine learning and deep learning, in both theory and applications. He has a strong track record of publications and research projects in the above areas. He is an Associate Editor for IEEE Transactions on Cybernetics.
Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
AI is transforming humanity’s cerebral evolution by replacing repetitive, habitual motions and thoughts. Over the course of evolution, humans developed their primary biological interfaces to interpret the data they receive through their five senses: seeing, hearing, smelling, touching, and tasting. Then, in 1991, the Wild Wild West (www.) was born, and the sensory assimilation of data felt the first angst of a new medium.
Twenty-six years later, more than 3.4 exabytes of data are generated every day. This is comparable to a stack of CDs reaching from Earth to the Moon and back, each day! This onslaught of data is causing people a great deal of anxiety, stress and frustration. How will people handle big data and turn it into personalized data to overcome the pressure of knowledge acquisition?
In recent years, genetic algorithm neural networks (GANN) and natural language processing (NLP) have entered the scene to provide “Data into Knowledge” (DiK) solutions. Research with GANN and NLP has enabled tools that selectively filter big data and combine it into micro self-reinforcement and personalized gamification of any DiK in dynamic real time. The combination of GA, NLP, MSRL and dynamic gamification lets people experience relief in their quest to turn data into knowledge faster and more easily.
Until Mr. Elon Musk creates a plug-in Matrix solution, humans are stuck with a multi-sensor Knowledge Generator approach.
Prof. Erwin Sniedzins has patented the Knowledge Generator™ (KG): a micro self-reinforcement learning, artificial intelligence application that personalizes and gamifies any digitized textual content in dynamic real time. The KG technology enables people to turn Data into Knowledge (DiK) 32% better, faster and easier, with more confidence and fun; no teacher or course designer is required. Erwin is the President of Mount Knowledge Inc., Toronto, Canada. The company is a global leader in AI, neural networks, automatic gamification of any textual data, and reinforcement learning. Erwin has authored and published 12 books, and is a keynote speaker, a Professor at Hebei University, and a Mt. Everest expedition leader.
This talk addresses future research, innovation and development in the field of Automation and Robotics, in conjunction with ubiquitous access to the Internet, Information and Communications Technologies (ICT), Smart Computational Devices (SCD), and ultrafast global communication. The third millennium is a new era of Smart Cyberspace, which is becoming pervasive in nature while connecting the next generation of ultra-smart robotic devices with computationally powerful SCDs accessible to anyone, anywhere, at any time. In support of Automation and Robotics, telecommunications network providers and SCD developers are working together to create much faster transmission channels that provide higher quality of service for any multimedia content, for anyone, anywhere, at any time. The Human Machine Interface, with high-definition audio and video, facilitates seamless control of Smart Robotic & Computational Devices (SRCD), which are becoming common technology in family homes, businesses, academia, and industry worldwide. Today, SRCD communicate via the Robotic Internet and may be accessible to public and private customers, while storing important and, to some extent, confidential information in their memory. If an SRCD is lost, stolen or hacked, the information stored in its memory could be abused, compromised or used for malicious purposes. In the near future we may see SRCD used to aid or protect residential areas, private homes, schools, hospitals and manufacturing plants, as well as Cyber Physical Critical Infrastructures (CPCI) such as atomic power and chemical plants, and large cities. Further research, innovation and development of future ultra-SRCD, side by side with a future ultrafast Robotic Internet, will require still more research, innovation and development in the field of Cyber Assurance and Security.
Proper safety and security mechanisms and policies will become critical to protect SRCD and CPCI from any form of intrusion or cyber threat, from anyone, anywhere, at any time. The author discusses current and future trends in research, innovation and development for SRCD, CPCI and Cyber Assurance, in conjunction with the future ultra-fast Internet and ultra-SRCD. The author promotes the creation of multidisciplinary, multinational research teams and the development of next-generation SRCD and fully automated environments utilizing ultra-smart robotic and computational devices, in conjunction with the critical cyber safety and assurance challenges of today and tomorrow.
Professor Eduard Babulak is an accomplished international scholar, researcher, consultant, educator, professional engineer and polyglot with more than thirty years of experience. He is widely published, and his research has been cited by scholars all over the world. He serves as Chair of the IEEE Vancouver Ethics, Professional and Conference Committee. He has been an invited speaker at the University of Cambridge, MIT, Yokohama National University and the University of Electro-Communications in Tokyo, Japan; Shanghai Jiao Tong University; Sungkyunkwan University in Korea; Penn State in the USA; Czech Technical University in Prague; the University of the West Indies; Graz University of Technology, Austria; and other prestigious academic institutions worldwide. His academic and engineering work has been recognized internationally by the Engineering Council in the UK and the European Federation of Engineers, and credited by the Ontario Society of Professional Engineers and APEG in British Columbia, Canada. He was awarded the higher postdoctoral degree of DOCENT - Doctor of Science (D.Sc.) in the Czech Republic; Ph.D., M.Sc. and Higher National Certificate (HNC) diplomas in the United Kingdom; and M.Sc. and B.Sc. diplomas in Electrical Engineering in Slovakia. He serves as Editor-in-Chief, Associate Editor-in-Chief, Co-Editor and Guest Editor. He speaks 16 languages, and his biography has been cited in the Cambridge Blue Book, the Cambridge Index of Biographies, Stanford Who’s Who, and a number of issues of Who’s Who in the World and Who’s Who in America.
• Fellow of the Royal Society of Arts (RSA), London, UK
• Chartered Professional IT Fellow, Mentor and Elite Group Member of the British Computer Society, London, UK
• Invited Panel Member for the National Science Foundation Graduate Research Fellowship Program, USA
• Expert Consultant for CORDIS FP6-FP7, European Commission, Brussels, Belgium
• Mentor and Senior Member of the IEEE and ACM, USA
• Nominated Fellow of the Institution of Engineering and Technology, London, UK, and Distinguished Member of the ACM, USA
• Chartered Member of the IET, London, UK
• Professional Member of the American Society for Engineering Education, the American Mathematical Association, and the Mathematical Society of America, USA
Nonlinear systems are typically linearized to permit linear feedback control design, but in some systems the nonlinearities are so strong that the behavior becomes chaotic, and linear control designs can be rendered ineffective. One famous example is the van der Pol equation of oscillatory circuits. This study investigates control design for the forced van der Pol equation, using simulations of various control designs over iterated initial conditions. The results highlight that even optimal linear time-invariant control is unable to control the nonlinear van der Pol equation, whereas idealized nonlinear feedforward control performs quite well after an initial transient caused by the initial conditions. The key novelty is the indication that idealized nonlinear feedforward control is generalizable as a first-step design benchmark.
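For readers who want to reproduce the kind of simulation described above, here is a minimal sketch of integrating the forced van der Pol equation with a classic fourth-order Runge-Kutta step. The parameter values (mu, forcing amplitude and frequency, initial condition, step size) are illustrative assumptions, not those of the study:

```python
import math

def vdp_step(x, v, t, dt, mu=1.0, A=1.2, w=2 * math.pi / 10):
    # forced van der Pol oscillator:  x'' - mu*(1 - x^2)*x' + x = A*cos(w*t)
    def f(x, v, t):
        return v, mu * (1 - x * x) * v - x + A * math.cos(w * t)
    # classic RK4 step
    k1x, k1v = f(x, v, t)
    k2x, k2v = f(x + dt / 2 * k1x, v + dt / 2 * k1v, t + dt / 2)
    k3x, k3v = f(x + dt / 2 * k2x, v + dt / 2 * k2v, t + dt / 2)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v, t + dt)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# iterate from one initial condition; the self-limiting nonlinearity keeps
# the trajectory bounded even though it is not a simple decaying oscillation
x, v, t, dt = 0.5, 0.0, 0.0, 0.01
for _ in range(20000):
    x, v = vdp_step(x, v, t, dt)
    t += dt
print(abs(x) < 10)  # True: the forced trajectory stays on a bounded orbit
```

Sweeping the initial condition in a loop like this is how the "iterated initial conditions" experiments can be set up, with a control term added to the right-hand side of the second state equation.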
Timothy Sands completed his PhD at the Naval Postgraduate School and postdoctoral studies at Stanford and Columbia Universities. Dr. Sands is an International Scholar Laureate of the Golden Key International Honour Society, a Fellow of the Defense Advanced Research Projects Agency (DARPA), and a panelist for the National Science Foundation Graduate Research Fellowship Program. He serves on the editorial boards of four academic journals, is an Associate Editor of the journal Robotics and Automation Engineering, has published prolifically in the archival literature, and has given several keynote and plenary presentations. Dr. Sands holds one patent in spacecraft attitude control.
The next generation of Internet of Things (IoT) devices will deliver expertise and adaptive command and control, beyond merely providing information for higher-level processing. The big data concept is that all connected devices produce information that is consumed by some higher-level system. The common perception is that as problems become more complex, only large processing engines can handle them, or that such problems require humans in the loop to interpret the complex information sets. Robots used in manufacturing create efficiencies all the way from raw material handling to finished product packing. Robots can be programmed to operate 24/7 in lights-out situations for continuous production. Robotic equipment is highly flexible and can be customized to perform even complex functions. With robotics in greater use today than ever, manufacturers increasingly need to embrace automation to stay competitive. Automation can be highly cost-effective for nearly every size of company, including small shops, and allows domestic companies to be price-competitive with offshore companies. Robotics in manufacturing achieves higher throughput, so companies can vie for larger contracts. Distributed control and supervisory control and data acquisition (SCADA) products have been used in industrial automation for many years. "Timesharing" and "cluster computing" are terms that have been associated with distributed computing. These terms are often used to describe the technology of the day for taking inputs, manipulating the data, and then distributing information to control actions or outputs. The evolution of technology that has produced the IoT market has been driven by the commoditization and redistribution of resources. Each time a shift takes place, some marketer will create a new name and claim the market.
Hamed Fazlollahtabar earned a BSc and an MSc in industrial engineering from Mazandaran University of Science and Technology, Babol, Iran, in 2008 and 2010, respectively. He received his PhD in industrial and systems engineering from Iran University of Science and Technology, Tehran, Iran, in 2015. He completed a postdoctoral research fellowship at Sharif University of Technology, Tehran, Iran, in the area of reliability engineering for complex systems, from October 2016 to March 2017. He joined the Department of Industrial Engineering at Damghan University, Damghan, Iran, in June 2017. He is on the editorial boards of several journals and on the technical committees of several conferences. His research interests are in robot path planning, reliability engineering, supply chain planning, and business intelligence and analytics. He has published more than 240 research papers in international books, journals, and conferences. He has also published seven books, of which five are internationally distributed to academicians.
The rapid proliferation of hand-held mobile computing devices, coupled with the acceleration of ‘Internet-of-Things’ connectivity and of data-producing systems such as embedded sensors, mobile phones and surveillance cameras, has certainly contributed to these advances. One of the fields in which scientific computing has made particular inroads is large-scale data analytics and machine vision systems. In our modern, digitally connected society, we are producing, storing and using ever-increasing volumes of digital image and video content. How can we possibly make sense of all this vision-centric data? And how can we be sure that the derived computations and analyses are fully relevant to human vision, understanding and interpretation? The current state of the art in machine vision analytics provides a variety of tools and methods for solving various classes of computer vision problems. We are then posed the following questions: how large a class of vision problems are we currently able to solve, compared with the totality of what humans can do? Can we duplicate human vision abilities in a computational device? The objective of this talk is to highlight the latest advances in this area of machine vision research and to provide novel insights into bio-inspired intelligence. We will also present our recent research work, give a synopsis of the existing state-of-the-art results in the field of machine vision, and discuss current trends in these technologies as well as the associated commercial impact and opportunities.
Sos Agaian is a Distinguished Professor of Computer Science at the Graduate Center/CSI, CUNY. Dr. Agaian was previously the Peter T. Flawn Professor at the University of Texas at San Antonio. He has authored over 650 peer-reviewed research papers, ten books, and nineteen edited proceedings, and holds over 44 issued or pending American and foreign patents/disclosures. Several pieces of Dr. Agaian’s IP are commercially licensed. He is an Associate Editor for several journals, including IEEE Transactions on Image Processing. He is a Fellow of IS&T, SPIE, AAAS and IEEE, and a Foreign Member of the Armenian National Academy.
Interest in scheduling problems is largely driven by the variety of areas in which they occur, such as project management, production, design, administration, operating systems, and industry. In complexity-theoretic terms, these problems are NP-hard. For that reason, approximate resolution methods have been proposed: they reduce CPU time by sacrificing the optimality of the solutions, which matters because exact methods cannot be used for large instances. Various scheduling problems are optimized, namely Resource-Constrained Project Scheduling Problems and flow shop and job shop scheduling problems with different variants and extensions (permutation flow shop, hybrid flow shop, flexible job shop, flexible job shop with transport robots, etc.). In recent years, distributed production systems have become more common in practice because they can improve product quality and reduce distribution costs and management risks. Scheduling in distributed systems is more difficult because of the hard decisions involved, in particular the assignment of jobs to factories and the scheduling of jobs within each factory. All these scheduling problems are tackled with new approaches based on multi-agent systems, metaheuristics and hybridizations.
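As a concrete illustration of the approximate methods mentioned above, the following sketch applies a simple metaheuristic (simulated annealing with a swap neighbourhood and an illustrative linear cooling schedule) to a tiny permutation flow shop instance. It is a toy example under invented data, not any of the specific algorithms of the talk:

```python
import math
import random

def makespan(perm, proc):
    # proc[j][m] = processing time of job j on machine m (permutation flow shop)
    m = len(proc[0])
    c = [0] * m                          # completion time of last job per machine
    for j in perm:
        c[0] += proc[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + proc[j][k]
    return c[-1]

def anneal(proc, iters=5000, t0=10.0, seed=0):
    # simulated annealing over job permutations with a swap neighbourhood
    rng = random.Random(seed)
    n = len(proc)
    cur = list(range(n)); rng.shuffle(cur)
    cur_val = makespan(cur, proc)
    best, best_val = cur[:], cur_val
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9  # linear cooling
        a, b = rng.sample(range(n), 2)
        cand = cur[:]; cand[a], cand[b] = cand[b], cand[a]
        val = makespan(cand, proc)
        # accept improvements always, worsenings with Boltzmann probability
        if val <= cur_val or rng.random() < math.exp(-(val - cur_val) / t):
            cur, cur_val = cand, val
            if val < best_val:
                best, best_val = cand[:], val
    return best, best_val

# 3 jobs on 2 machines; Johnson's rule gives the optimal makespan of 8 here
proc = [[3, 2], [1, 4], [2, 1]]
perm, ms = anneal(proc)
print(ms)  # 8
```

On realistic instances the neighbourhood, cooling schedule and hybridization with other metaheuristics are exactly the design decisions the talk is about; this skeleton only shows where they plug in.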
Olfa Belkahla Driss received B.Sc., M.Sc. and Ph.D. degrees in computer science from the Institut Supérieur de Gestion de Tunis, University of Tunis, Tunisia, in 1997, 2000 and 2006, respectively. She has been an Assistant Professor in the Department of Computer Science at the Ecole Supérieure de Commerce de Tunis, University of Manouba, since 2003, and since 2010 has supervised the Research Master in Computational Intelligence and Decision Making Applied to Management. Her main research interests are in industrial engineering, transport and production logistics, scheduling, artificial intelligence, multi-agent systems, constraint satisfaction problems, optimization, metaheuristics, and hybrid algorithms. She has authored more than 60 research papers. She is a member of the technical committees of many international conferences and a reviewer for many journals.
Reconstruction and extrapolation of multi-dimensional data is a subject that arises in many branches of mathematics, artificial intelligence and computer science. Such modeling connects pure mathematics with the applied sciences; Artificial Intelligence is similarly situated on the border between the two. Our life and work are impossible without controlling, planning, timetabling, scheduling, decision making, optimization, simulation, data analysis, risk analysis and process modeling. Topics: 1. Data interpolation with applications. 2. Data extrapolation in Artificial Intelligence. 3. Reconstruction of information. 4. Data restoration in optimization. 5. Probabilistic interpolation and extrapolation. 6. Modeling of processes. 7. Data simulation in controlling. 8. Mathematical modeling.
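As a small, concrete instance of topic 1 (data interpolation), the following sketch reconstructs a missing sample with classical Lagrange polynomial interpolation; the data points are invented for illustration, and the probabilistic methods of the talk would replace the polynomial model:

```python
def lagrange(points, x):
    # evaluate the Lagrange interpolating polynomial through `points` at x
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

# reconstruct a missing sample of f(x) = x**2 from three known points;
# a degree-2 polynomial is determined exactly by 3 points
pts = [(0, 0.0), (1, 1.0), (3, 9.0)]
y2 = lagrange(pts, 2.0)
print(y2)  # ≈ 4.0
```

Extrapolation uses the same formula with x outside the span of the known nodes, which is precisely where polynomial models become unreliable and probabilistic reconstruction becomes attractive.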
Dariusz Jacek Jakóbczak was born in Koszalin, Poland, on December 30, 1965. He graduated in mathematics (numerical methods and programming) from the University of Gdansk, Poland, in 1990. He received his Ph.D. in computer science in 2007 from the Polish-Japanese Institute of Information Technology, Warsaw, Poland. From 1991 to 1994 he was a civilian programmer at the High Military School in Koszalin, and from 1995 to 1999 he taught mathematics and computer science at the Private Economic School in Koszalin. Since March 1998 he has worked in the Department of Electronics and Computer Science, Koszalin University of Technology, Poland, and since October 2007 he has been an Assistant Professor in the Chair of Computer Science and Management in that department. His research interests connect mathematics with computer science and include computer vision, artificial intelligence, shape representation, curve interpolation, contour reconstruction and geometric modeling, numerical methods, probabilistic methods, game theory, operational research and discrete mathematics.
Underwater robotics is the primary technology for future ocean exploration and inspection. Subsea oil and gas and deep-sea mining are currently the primary drivers of development in underwater robotics. Global ROV operational expenditure has totaled more than $14 billion, and the subsea oil and gas sector has purchased about 50% more remotely operated vehicles (ROVs) than defense, security and scientific research. However, autonomous underwater vehicles (AUVs) have gained popularity in recent years owing to their increased functionality and the demand for floating oil production systems. This presentation focuses on the latest modeling and control trends for ROVs in marine applications that require precision and a certain level of autonomy for auto-pilot functions. In addition to the requirement for an accurate deployment location, a sufficiently specific model of the underwater robot is vital for control system design. The hydrodynamic damping and added mass of an open-frame ROV are harder to model for control purposes than those of an AUV. An efficient modeling and simulation approach that obtains the ROV's hydrodynamic damping and added-mass coefficients using computer simulation software, integrated into the early control system design stage, will be insightful. A robust controller relying on the less accurate model can therefore be used to achieve acceptable target position and response.
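A minimal sketch of the kind of model discussed above: a single-degree-of-freedom surge model with added mass and linear-plus-quadratic hydrodynamic damping, integrated with a simple Euler step. All coefficient values are illustrative assumptions, not identified ROV parameters:

```python
def surge_step(v, tau, dt, m=100.0, m_added=30.0, d_lin=20.0, d_quad=50.0):
    # 1-DOF surge dynamics with added mass and two damping terms:
    #   (m + m_added) * dv/dt = tau - d_lin*v - d_quad*|v|*v
    # The |v|*v term keeps the quadratic drag opposing the motion in both directions.
    dv = (tau - d_lin * v - d_quad * abs(v) * v) / (m + m_added)
    return v + dt * dv

# apply a constant thrust and integrate until the speed settles at the value
# where thrust balances drag: tau = d_lin*v + d_quad*v^2
v, tau, dt = 0.0, 40.0, 0.01
for _ in range(5000):
    v = surge_step(v, tau, dt)
print(round(v, 3))  # settles near 0.717 for these coefficients
```

Identifying `m_added`, `d_lin` and `d_quad` from CFD or tank tests is exactly the modeling step the talk argues should be integrated into early control design; the controller is then tuned against this identified model.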
Dr. Cheng Siong Chin completed his Ph.D. at Nanyang Technological University, Singapore, and is an Associate Professor at Newcastle University. He worked in the electronics industry for a few years before moving into academia. He currently holds three U.S. patents and has nearly 100 publications involving modeling, simulation, noise prediction, and control system design for marine mechatronic systems. He is an elected committee member of the IEEE Oceanic Engineering Society in Singapore, Lead Guest Editor for the Journal of Advanced Transportation, a member of the IEEE Technical Committee on Biomechatronics and Bio-robotics Systems, and an external reviewer of Singapore Ministry of Education R&D project funding.
As technology systems grow more complex, issues of end-product equipment safety, ease of operation and reduction of the risk of human error are becoming extremely important. Designers today know that the operational performance, efficiency, and safety of a wide range of systems, from semiconductor fabrication equipment to mass transit vehicles, are closely related to the interaction between humans and machines: the human machine interface (HMI). The selection and seamless integration of HMI components, such as switch controls, actuators and indicators, are critical to the success of equipment designed for human operation.
A Human Machine Interface (HMI), also called a User Interface (UI), Operator Interface Terminal (OIT) or Man Machine Interface (MMI), encompasses hardware and software solutions for information exchange and communication between systems or machines and a human operator. HMIs enable control, management and/or visualization of device processes and can range from simple inputs on a touch display to control panels for highly complex industrial automation systems. HMIs can be found in many locations, such as portable handheld devices, on machines, in centralized control rooms, and in factory-floor machine and process control. Applications include industrial and building automation, digital signage, vending machines, medical, automotive, and appliances.
HMI applications require mechanical robustness and resistance to water, dust, moisture and a wide range of temperatures, and in some environments, secure communication. The development of a successful HMI solution for integration into a complex system relies to some extent on a balancing act: front-end consideration must be given to the engineering and financial constraints placed on a project, while the cost rewards to be gained from the investment must also be assessed. Employing high-quality design, best practices and proven techniques results in reliable HMI systems, such as complete control panel inserts, that reduce end-product assembly costs and extend service life.
Prof. Sanjay joined GITAM, Hyderabad as Founder Director in June 2009. He is a certified Corporate Director of the World Council for Corporate Governance, UK, has visited South Asian countries for research, and has presented papers at international conferences. He has 24 years of experience in industry, research and teaching, and worked for four years as an Associate Professor of Manufacturing Engineering at government universities in Malaysia and Singapore. He has been a keynote speaker at more than 57 international and national conferences, has served on the advisory and technical committees of more than 90 international conferences, and has successfully organized 65 international conferences and workshops. He is an editorial board member and reviewer for more than 20 international journals, has evaluated 16 doctoral theses at mechanical/manufacturing universities, and is currently guiding 5 scholars. He has authored 60 research papers. He is the recipient of international awards for Academic Excellence, Research Excellence, Engineer, Principal, Academic Administrator, Education Leadership and Engineers Educator.
In the last few years we have been developing a brain-inspired approach that we call COSFIRE (Combination of Shifted Filter Responses). The idea is to configure a pattern-selective filter by automatic analysis of one or more given prototypes. For instance, we demonstrate that by using a synthetic edge as a prototype we can configure an edge-selective filter, which takes as input the responses of center-on and center-off Difference-of-Gaussians filters, and which is highly effective for contour detection. It turns out that the resulting filter is a computational model of real simple cells. It exhibits more of the properties typical of such cells (e.g. push-pull inhibition) than the Gabor function model does, and it also outperforms that model in a contour detection task (Azzopardi and Petkov, BICY 2012; Azzopardi et al., PLOS ONE 2014). Similarly, using a vessel-like pattern, we demonstrate the configuration of a vessel-selective filter, which has been found highly effective for delineating the vessel tree in retinal fundus images (Azzopardi et al., MEDIA 2015; Strisciuglio et al., MVAP 2016). The COSFIRE approach is trainable, in that its selectivity is determined from a given prototype rather than being predefined in the implementation. This trainable character makes the approach suitable for configuring filters that are selective for more complex patterns, such as curvatures, junctions and irregular patterns. In particular, we demonstrate how curvature-selective COSFIRE filters respond qualitatively similarly to shape-selective neurons in area V4 of visual cortex of the type studied by Pasupathy (1998). In (Azzopardi and Petkov, PAMI, 2013) we showed that COSFIRE filters are highly effective for object localization and recognition in complex scenes and for image classification, and can also be used as shape descriptors. They are also tolerant to changes in scale, rotation and reflection.
Further improvements in the configuration of COSFIRE filters have been proposed in recent works (Robles et al., ICPR 2016; Geçer et al., Image Vis Comput 2016). Moreover, taking inspiration from the selectivity of TEO neurons, we have enriched the COSFIRE filters with an inhibition mechanism that improves their selectivity (Guo et al., MVAP 2016). Besides the applications mentioned above, COSFIRE filters have been found very effective in the following applications: classification of traffic sign images, gender recognition from face images, keyword spotting in handwritten manuscripts, recognition of handwritten digits, bifurcation detection in retinal images, recognition of electrical and architectural symbols, and a machine vision application. COSFIRE filters are conceptually simple, easy to implement and highly parallelizable. The filter output is computed as the product of blurred and shifted responses of lower-level filters. They are versatile detectors of contour-related features, as they can be trained with any given local contour pattern and are subsequently able to detect identical and similar patterns. The Matlab code of the COSFIRE filters is available online at http://tinyurl.com/zxgr37k. In this talk I will explain how we can configure and apply such COSFIRE filters and demonstrate their effectiveness in some applications. Finally, I will discuss the outlook for COSFIRE filters and possible directions for future research.
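The output computation described above (the product of blurred and shifted responses of lower-level filters) can be sketched in one dimension as follows. The subunit responses and offsets are invented toy data, and this is a conceptual illustration of the combination step only, not the published implementation:

```python
import math

def gaussian_blur_max(resp, pos, sigma):
    # "blur" step: a Gaussian-weighted maximum of a subunit's responses
    # around the position where that subunit is expected
    return max(resp[i] * math.exp(-((i - pos) ** 2) / (2 * sigma ** 2))
               for i in range(len(resp)))

def cosfire_response(sub_responses, offsets, pos, sigma=1.0):
    # COSFIRE output at `pos`: geometric mean (i.e. normalized product) of the
    # blurred, shifted subunit responses; it is high only when ALL expected
    # parts of the pattern are present at their expected relative positions.
    vals = [gaussian_blur_max(r, pos + d, sigma)
            for r, d in zip(sub_responses, offsets)]
    prod = 1.0
    for v in vals:
        prod *= v
    return prod ** (1.0 / len(vals))

# two subunits expected at offsets -2 and +2 from the pattern centre
r1 = [0.0] * 9; r1[2] = 1.0   # subunit 1 fires at index 2
r2 = [0.0] * 9; r2[6] = 1.0   # subunit 2 fires at index 6
resp_at_4 = cosfire_response([r1, r2], [-2, +2], pos=4)
resp_at_1 = cosfire_response([r1, r2], [-2, +2], pos=1)
print(resp_at_4 > resp_at_1)  # True: both parts line up only around index 4
```

The multiplicative combination is what gives the filter its AND-like selectivity: a missing subunit response suppresses the whole output, unlike an additive template match.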
As of March 2015 I am an Academic Resident (Lecturer) in the Intelligent Computer Systems department of the ICT Faculty at the University of Malta, where I lecture courses on intelligent interfaces, pattern recognition and computer vision. I am also affiliated with the University of Groningen in the Netherlands, where I co-supervise PhD and Masters students. Together with my collaborators from the Universities of Groningen and Wageningen (the Netherlands), I am the recipient of a ~EUR 500k research grant from the Breed4Food program of STW in the Netherlands, for a project called SMARTBREED - Smart animal breeding using advanced machine learning. Before my current position I was a research innovator at TNO (4 days per week) and a post-doc researcher/lecturer at the University of Groningen (one day per week), in the Netherlands. At TNO I was involved in optimization, signal processing, predictive modeling and computer vision projects. I received a PhD cum laude in Computer Science from the University of Groningen (Netherlands) in April 2013. During my studies I developed novel trainable pattern recognition algorithms and published my work in high-ranking peer-reviewed journals, including IEEE Transactions on Pattern Analysis and Machine Intelligence and Medical Image Analysis. The thesis can be downloaded from http://www.cs.rug.nl/~george/#Downloads. In 2006 I received a BSc degree (first-class honours) in Computer Science and in 2008 an MSc degree with distinction in Computer Vision, both from the University of London. For the BSc degree I received an academic award for outstanding achievement, and for the MSc degree I ranked first. Between 2000 and 2007 I was a full-time software developer at Bank of Valletta (Malta), and for more than three years I was a part-time scientific developer at Iteanova Ltd.
The future of communication: how can chatbots change the way we talk to each other? What is the convosphere, and why is it so important? Our communication habits: what are we looking for? Chatbots as an ultimate tool for bringing Alzheimer's patients back to social life, and chatbots in clinical psychiatry: more than just a computer. Chatbots for business, and how they can bring value as an additional omni-channel. How blockchain and chatbot technologies could be combined: what is a chatbot on blockchain technology?
George I. Fomitchev is the CEO and founder of Endurance, a serial entrepreneur and futurist, and a speaker at international conferences on chatbots, messengers, AI and blockchain.
We have come to expect most applications these days to push us just-in-time information and suggestions based on our location, interests, history, etc. Such applications are a feat of distribution: they dutifully synchronize with a multitude of other services and often request results obtained by clusters of machines churning through massive datasets. Many researchers strive to simplify the task of developing distributed systems and to improve the performance and quality of such systems. Big data has become ubiquitous in modern society, but drawing insights from it remains a challenge owing to its unprecedented degrees of heterogeneity, often compounded by inadequate experimental design. The past decade has seen considerable development of big data algorithms, but significant challenges remain for the area's theoretical underpinning. Apache Spark is a powerful open source processing engine built around speed, ease of use, and sophisticated analytics. It is an in-memory distributed data processing framework created to process humongous volumes of data; it has seen rapid adoption by enterprises across a wide range of industries and has quickly become the largest open source community in big data. Deep learning is a technique for automatically finding hierarchical patterns in large sets of data and has been used in recent years to achieve breakthrough advances in computer vision, machine translation, speech recognition, game playing, robotics and other applications. The recent progress and future potential of deep learning have led to immense interest and to its adoption by all large technology companies and across the research domain. Many customers process the massive amounts of data that feed these deep neural networks in Apache Spark, only to later feed it into a separate infrastructure to train models using popular frameworks such as Apache MXNet and TensorFlow.
Because of the popularity of Apache Spark, which now counts more than a thousand contributors, the developer community has expressed interest in uniting big data infrastructure and deep learning into a single workflow under Apache Spark.
Importance of the topic:
The major focus of this tutorial is to create awareness among faculty members, researchers and students of the usage of distributed deep learning on Apache Spark. By simply adding the BigDL library, Apache Spark becomes a powerful distributed deep learning framework. In this tutorial, we give hands-on experience with image recognition in the Apache Spark framework using the BigDL library. Since proper knowledge management today involves distributed big data technologies, the proposed tutorial addresses the theme of the conference.
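The data-parallel scheme that a framework like BigDL applies across Spark executors can be illustrated in miniature. The sketch below is plain Python with no Spark or BigDL dependency; the dataset, model and function names are all illustrative. Each simulated worker computes a gradient on its own data partition, and the driver averages the gradients before updating the shared weights, mirroring the map/reduce pattern of synchronous distributed SGD.

```python
# Minimal sketch of synchronous data-parallel training (gradient averaging),
# the pattern distributed deep learning frameworks use on Spark executors.
# Pure Python; illustrative only, not BigDL code.

def local_gradient(w, data):
    """Gradient of mean squared error for y = w*x on one worker's partition."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def distributed_sgd_step(w, partitions, lr=0.05):
    """Each 'worker' computes a gradient on its partition; the driver averages them."""
    grads = [local_gradient(w, p) for p in partitions]   # map phase
    avg_grad = sum(grads) / len(grads)                   # reduce phase
    return w - lr * avg_grad

# Toy dataset y = 3x, split across two simulated workers.
partitions = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = distributed_sgd_step(w, partitions)
print(round(w, 3))  # converges toward 3.0
```

In a real Spark/BigDL job the "map phase" runs on executors holding RDD partitions and the parameter update is coordinated by the driver; the arithmetic, however, is exactly this.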
Prof. Dr. V. Vijayakumar is currently a Professor and an Associate Dean of the School of Computing Science and Engineering at VIT University, Chennai, India. He has more than 18 years of industrial and institutional experience, including several years as a Team Lead in industry at Satyam, Mahindra Satyam and Tech Mahindra. He completed his Diploma with First Class Honours, his BE in CSE and MBA in HRD with First Class, and his ME in CSE with the First Rank Award. He received his PhD from Anna University in 2012. He has published many articles in national and international journals, conferences and books, and is a reviewer for IEEE Transactions and for Inderscience and Springer journals. He has initiated a number of international research collaborations with universities in Europe, Australia, Africa, Malaysia, Singapore and North America, as well as joint research collaborations between VIT University and various industries. He is a Guest Editor for several journals published by Inderscience, Springer and IGI Global, and he has organized several international conferences and special sessions in the USA, Vietnam, Africa, Malaysia and India, including IEEE, ACSAT, ISRC, ISBCC and ICBCC events. His research interests include grid computing, cloud computing, computer networks, cyber security and big data. He received his university-level Best Faculty Award for 2015–2016. He is a member of several national and international professional bodies, including EAI, BIS, ISTE, IAENG, CSTA and IEA.
The theory of evolution equations is a very important branch of mathematics, underlying models in many disciplines: physics, chemistry, computer science, biology, economics and others. The vast class of artificial neural networks has aroused the interest of many researchers and has been the subject of much of their work. The evolution of the neural states of recurrent models can be studied in order to take the greatest advantage of the various possibilities these models offer, including their speed of convergence. In addition, these models allow the resolution of complex problems of control, shape or character recognition, optimization, decision-making and memorization. In this project, we study master-response synchronization, the stability and oscillations of time-delay systems, fuzzy/neural network modeling and the effect of delay on synchronization.
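A common way to write the master-response (drive-response) setup for a delayed neural network, using generic notation rather than formulas taken from this project, is:

```latex
% Master (drive) system: a Hopfield-type network with delay \tau
\dot{x}(t) = -C x(t) + A f(x(t)) + B f(x(t-\tau)) + I

% Response (slave) system with linear feedback control
\dot{y}(t) = -C y(t) + A f(y(t)) + B f(y(t-\tau)) + I + u(t),
\qquad u(t) = K\,\bigl(x(t) - y(t)\bigr)

% Synchronization: the error e(t) = x(t) - y(t) must satisfy
\lim_{t \to \infty} \|e(t)\| = 0
```

Here $C$, $A$, $B$ are the self-feedback, connection and delayed-connection matrices, $f$ the activation, and $K$ the control gain; the effect of the delay $\tau$ on when synchronization can be achieved is exactly the kind of question the abstract describes.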
Dr. Adnène Arbi received his Ph.D. degree in Mathematics from Carthage University, Tunisia, in 2014. His research interests include robust control and filtering, stochastic systems, time-delay systems, and fuzzy/neural network modeling and control. He is currently an Assistant Professor in the Department of Mathematics and Computer Science at the Higher Institute of Applied Sciences and Technology of Kairouan, University of Kairouan, and a member of the Laboratory of Engineering Mathematics at the Polytechnic School of Tunisia, University of Carthage. He is the author of more than eight peer-reviewed papers in international journals and conference proceedings, and he has served as a reviewer for many international conferences and workshops.
Offshore crane design requires the configuration of a large set of design parameters in a manner that meets customers’ demands and operational requirements, which makes it a very tedious, time-consuming and expensive process if it is done manually. The need to reduce the time and cost involved in the design process encourages companies to adopt virtual prototyping in the design and manufacturing process. In this paper, we introduce a server-side crane prototyping tool able to calculate a number of key performance indicators of a specified crane design based on a set of about 120 design parameters. We also present an artificial intelligence client for product optimization that adopts various optimization algorithms such as the genetic algorithm, particle swarm optimization, and simulated annealing for optimizing various design parameters in a manner that achieves the crane’s desired design criteria (e.g., performance and cost specifications). The goal of this paper is to compare the performance of the aforementioned algorithms for offshore crane design in terms of convergence time, accuracy, and their suitability to the problem domain.
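As a small illustration of how one of the compared algorithms explores a design-parameter space, the following minimal genetic algorithm evolves a parameter vector by selection, one-point crossover and Gaussian mutation. It is plain Python and the cost function is a toy stand-in, not the paper's crane KPI evaluator (which scores roughly 120 parameters).

```python
import random

def cost(params):
    """Toy stand-in for the crane KPI evaluator: minimum at 0.5 per parameter."""
    return sum((p - 0.5) ** 2 for p in params)

def genetic_optimize(n_params=5, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]             # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)               # Gaussian mutation
            child[i] += rng.gauss(0, 0.05)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_optimize()
print(cost(best))  # far below the expected random cost of ~0.42
```

Particle swarm optimization and simulated annealing replace the selection/crossover loop with velocity updates or temperature-controlled acceptance, which is precisely the trade-off in convergence time and accuracy that the paper compares.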
Ibrahim holds a PhD in Robotics from the Department of Engineering, Aarhus University, Denmark; a PhD in Industrial Systems and Information Engineering from Korea University, South Korea; and MSc and BSc degrees in Control and Electronic Engineering from Menofia University, Egypt.
His research covers the development of novel algorithms for mobile robot navigation, simultaneous localization and mapping, obstacle avoidance and path planning, sensor fusion, estimation and control, and optimization. He also has strong analytical skills and a solid mathematical foundation, with a strong emphasis on algorithm development in Matlab.
We are in the middle of the fourth technological revolution, and in artificial intelligence human knowledge now doubles every 12 hours. That impacts everyone and everything, and most people cannot cope with the developments in their ever faster-changing everyday lives. Too much information is available, and no one is able to qualify its value or to classify and process all the relevant information necessary to reach a well-informed, fact-balanced personal decision. As a result, a fast-growing number of people are unhappy, stressed and left behind, leading to an increase in obesity, other health problems and various social patterns that negatively influence the balance in our societies. Scientists, researchers and entrepreneurs using AI technologies are moving us from an "information" society to a "solution" society. But as the world grows more complex and offers millions of options, solutions are meaningful only if they are focused on true individual needs. To make that possible, and in addition to existing classifiers such as socio-demographics, heritage, gender or age, the AI community is producing markers such as DNA, behavioral patterns, and speech, face, head-bone and body-language analyses. I will talk about our experiences with multiple use cases, e.g. face patterns correlated to disease groups, a combination of speech and body-language analyses to find the best acting talent for a role, and real-time facial-feature emotion analysis to enable pain detection and pain management.
Heiko Schmidt holds master's degrees in science and engineering from the Academy of Science and in music from the Music University in Berlin, Germany. A serial entrepreneur, Heiko has built companies in music, media, marketing and technology on four continents over a period of 25 years. He is now focused on helping AI researchers and engineers commercialize their AI algorithms. As CEO of Z21 Health, he develops passive health-screening tools for early disease detection and guides patients to the best medical solutions in the world.
This work presents a new perspective on emotion in artificial systems. Even though there is no accepted general theory, it is clear that emotion helps to optimize the responses of living beings in accordance with their situational state and the state of their environment. This ability provides living systems with a powerful tool that improves their responses to their situational state, and they show remarkable proficiency in adaptiveness. Such adaptiveness, however, remains a challenge for artificial systems. Since robots and artificial systems first integrated emotional skills some decades ago, major goals in behavior, decision-making and communication have been reached. The usual methodology is based on selecting a theory that explains the emotional features relevant to a specific problem domain. Once the theory is computationally conceptualized, the bounded set of emotional features is linked to a bounded set of processes within the system, concerning a bounded set of scenarios defined at design time. This isolation appears to be the cause of unsolvable knowledge gaps when undefined scenarios arise, since it breaks the feedback loop by which the system measures and decides. It seems as if biological adaptiveness is directly linked to deploying the full range of processes that build the emotional phenomenon, not merely the isolated parts used in artificial systems. This work argues that artificial emotion is the final result of a whole set of computational processes inside the system, and that the full range of these processes makes it possible to close a feedback loop of emotion-based qualitative knowledge. It therefore argues for research on computational emotion, that is, the full set of finite means that might allow the emergence of artificial emotion in artificial systems.
Doctor in Robotics and Artificial Intelligence in the field of artificial emotion, and Professor of Engineering and Project Management with more than 10 years of experience in artificial intelligence, robotics and systems engineering. My research explores the fields of artificial intelligence and computational science applied to complex systems from a systems-engineering perspective. I have technical expertise in computer vision, artificial intelligence and computational algorithms, and I am also experienced in European and national projects.
Despite recent rapid advances and successful large-scale applications of deep convolutional neural networks (CNNs) using image, video, sound, text and time-series data, their adoption within the oil and gas industry in particular has been sparse. In this paper, we first present an overview of opportunities for deep CNN methods within the oil and gas industry, followed by details of a novel development in which a deep CNN has been used for state classification of an autonomous gas-sampling procedure performed by an industrial robot. The experimental results, using a deep CNN containing six layers, show accuracy levels exceeding 99%. In addition, the advantage of parallel computing with GPUs is re-confirmed by a reduction factor of 43.8 in the training time required compared with a CPU implementation. Finally, by analyzing the variations in the output probability distribution, it is shown that the deep CNN can also detect a number of undefined and therefore untrained anomalies. This is an extremely appealing property and serves as an illustrative example of how deep CNN algorithms can contribute towards safer and more robust operation in the industry.
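The anomaly-detection idea in the abstract, flagging inputs whose output probability distribution is unusually flat, can be sketched without any deep learning framework. The threshold and function names below are illustrative assumptions, not taken from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy of a softmax output; flat distributions score high."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def classify_or_flag(probs, max_entropy_ratio=0.6):
    """Return the predicted class index, or None if the output looks like an
    untrained anomaly (entropy close to the uniform maximum log(n))."""
    h_max = math.log(len(probs))
    if entropy(probs) > max_entropy_ratio * h_max:
        return None  # undefined state: distribution too flat to trust
    return max(range(len(probs)), key=lambda i: probs[i])

print(classify_or_flag([0.97, 0.01, 0.01, 0.01]))   # confident: class 0
print(classify_or_flag([0.30, 0.25, 0.25, 0.20]))   # flat: None (anomaly)
```

A trained classifier tends to produce peaked softmax outputs on in-distribution states, so a near-uniform output is a usable signal that the robot has entered a state the network was never trained on.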
Dr. Anisi received the Doctor of Science and Licentiate of Science degrees in Optimization and Systems Theory and the Master of Science degree in Engineering Physics, all from the Royal Institute of Technology (KTH) in Stockholm, Sweden. He is a Principal R&D Engineer at ABB and holds an adjunct position as Associate Professor in the Mechatronics Department at the University of Agder (UiA), Norway, thereby incorporating more than nine years of industrial experience with autonomous systems and robotics in both the defense and oil and gas sectors into his academic work. He has published approximately 25 peer-reviewed papers and 4 patents, and has served on the Program Committee and as Robotics Expert in the Norwegian Federation of Automation (NFA) since 2014.
Reinforcement learning (RL) is a type of machine learning that works on the principle of reward (positive value) and punishment (negative value). The learner is not told how to act, but has to discover which actions yield the most reward. The reward function acts as a feedback mechanism, as opposed to a supervisor in the case of supervised learning. RL is one of the most active areas of research in artificial intelligence. A learning system that receives punishment has to improve itself, so RL algorithms selectively retain the outputs that maximize the received reward over time. To accumulate a lot of reward, the learning system must prefer the best actions found so far; however, it also has to try new actions in order to discover better action selections for the future. Genetic algorithms (GA) are metaheuristics based on evolutionary principles, relying on the mechanism of natural evolution. They are stochastic search methods for solving optimization problems, originally founded by Holland. A GA works on coded structures of the parameter space: it maintains a population of structures and evaluates the performance of each. Each structure is in turn used to generate new structures via the GA operators. The mutation operator introduces noise, adding new genetic material to an existing structure, whereas the crossover operator produces new structural configurations by combining more than one structure. Reinforcement learning requires exploration of the state space and substantial memory to store the different explored outcomes. A GA helps in finding the goal by balancing exploration and exploitation, thereby leading to better solutions through better search.
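The reward/punishment principle can be made concrete with a tiny tabular Q-learning example. Everything here (the corridor world, rewards, hyperparameters) is an illustrative assumption, not drawn from the abstract:

```python
import random

# Tabular Q-learning on a 5-cell corridor: start at cell 0, reward +1 at
# cell 4, punishment -1 for stepping off the left edge. Actions: 0=left, 1=right.
N, GOAL = 5, 4

def step(state, action):
    nxt = state + (1 if action == 1 else -1)
    if nxt < 0:
        return 0, -1.0, False          # punished and pushed back
    if nxt == GOAL:
        return nxt, 1.0, True          # rewarded; episode ends
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, sometimes explore new actions
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # selectively retain outputs that maximize reward over time
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N)]
print(policy)  # the learned policy moves right, toward the reward
```

A GA-based variant would instead evolve a population of candidate policies and select on accumulated reward, which is the exploration/exploitation balance the abstract describes.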
Nitin Choubey completed his PhD in Computer Science & Engineering (2014) and PhD in Business Management (2004) at Sant Gadge Baba Amravati University, India. He is the Associate Dean of SVKM's NMIMS, MPSTME, Shirpur, a leading university for engineering and management in India. He has published more than 50 papers in reputed journals and conferences, has published two books, and serves as an editorial board member of many reputed journals.
Microbiorobotics, in which biological components are harnessed for actuation and smart applications on the microscale, is a fascinating field that has developed rapidly in recent years. The integration of biological power sources such as spermatozoa leads to promising new approaches in diagnostics and therapy [1]. Sperm-driven micromotors are useful devices for the controlled transport, guidance and delivery of single sperm cells [2,3]. In this talk, I will explain why sperm cells are very useful as components of microrobots, thanks to their motility and their ability to sense environmental changes. I will also demonstrate how, by combining sperm cells with smart materials, advanced autonomous microrobots can be developed.
Today, robotics is a rapidly growing field; nanotechnological advances continue to drive the research, design and creation of new robots for various practical purposes, whether domestic or military. Nanorobotics is the technology of creating machines or robots at or close to the scale of a nanometre (10⁻⁹ metres). The enormous potential of the biomedical capabilities of nanorobots, together with the imprecision and side effects of today's medical treatments, makes nanorobots very desirable. We propose nanomedical robots because they will have no difficulty identifying target cells even at very early stages, which cannot be done with traditional treatment, and will ultimately be able to track them down and destroy them wherever they may be growing. Nanotechnology as a diagnostic and treatment tool for patients with cancer and diabetes shows how recent developments in new manufacturing technologies are enabling innovative work that may help in constructing and employing nanorobots most effectively for biomedical problems. Consequently, they will change the shape of the industry, broadening product development and marketing interactions between the pharmaceutical, biotech, diagnostic and healthcare industries. Nanorobots are typically devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. The names nanorobots, nanoids, nanites and nanomites have also been used to describe these hypothetical devices. Nanorobots can be used in different application areas such as medicine and space technology. Nowadays, they play a crucial role in the field of biomedicine, particularly in the treatment of cancer and cerebral aneurysm, the removal of kidney stones, the elimination of defective parts in the DNA structure, and other treatments that need the utmost support to save human lives.
Prof. Sanjay joined GITAM, Hyderabad as Founder Director in June 2009. He is a certified Corporate Director of the World Council for Corporate Governance, UK, has visited South Asian countries for research, and has presented papers at international conferences. He has 24 years of experience in industry, research and teaching, and worked for four years as an Associate Professor of Manufacturing Engineering at government universities in Malaysia and Singapore. He has been a keynote speaker at more than 57 international and national conferences, a member of the advisory and technical committees of more than 90 international conferences, and has successfully organized 65 international conferences and workshops. He is a member of the editorial boards of, and a reviewer for, more than 20 international journals; he has evaluated 16 doctoral theses in mechanical and manufacturing universities and is currently guiding 5 scholars. He has authored 60 research papers. He is a recipient of international awards for Academic Excellence, Research Excellence, Engineer, Principal, Academic Administrator, Education Leadership and Engineers Educator.
With Artificial Intelligence (AI) emerging as the next digital frontier in the information world, it is revolutionizing the landscape of autonomous robotics. Successfully deploying AI agents as part of typical perception pipelines for real-world interaction requires self-learning. The bottlenecks in reflecting experience, intuition and emotional intelligence have recently driven the application of data science to AI, which has in turn been instrumental in advancing data science itself. But by how far? In this talk, a data-centric discussion and understanding of AI adoption within autonomous robotics will be presented. Key factors include concepts hidden in the messy data of the real world, human interaction in rudimentary AI, the promise and threat of "general" and "narrow" AI, and the data uncertainty and bias that may increase the complexity of adaptive weak AI, leading to the emergence of unexpected behaviors in autonomous robotics. The talk will review mitigation methodologies that evolutionary AI models should adopt to cope with such data-driven complexity in AI, and will draw key insights into the aforementioned challenges and mitigation strategies by considering specific use cases from recent trends in the autonomous vehicle and drone businesses.
Harish is an experienced data officer and an expert in Artificial Intelligence (AI)-powered data analytics, with a demonstrated history of more than 15 years working in the aerospace, telecommunication, automotive, security, geospatial and biomedical industries. He is the Director of Artificial Intelligence and Chief Scientist at the Linkay Technologies Inc. group of companies headquartered in the USA. Harish's technological vision is to focus on building customized, innovative, yet practical AI technologies that provide perceptually meaningful insight into data for businesses. Prior to his current role, Harish was engaged on a consulting contract with The Sky Guys/Defiant Labs, Canada, as their Chief Data Scientist. Previously, he was a chief engineer at Samsung Electronics, India, and an assistant professor/lead at the Visual Signal Analysis and Processing (VSAP) research center at Khalifa University, UAE. Harish has also worked as a researcher, managing UK Ministry of Defence (MoD) and European Union (EU)-funded research and development (R&D) projects at Lancaster and Manchester universities in the UK. Harish is a strong engineering professional with a Ph.D. in Computer Science and an M.Sc. in Autonomous Systems from Loughborough and Exeter universities in the UK, respectively. He has a proven capacity to coordinate and spearhead the establishment of international research units, and extensive management experience recognized through the successful completion of several publicly and privately funded R&D projects in more than five countries across the world. Harish is an ardent leader whose mission is to focus on continuous growth, excellence and knowledge transfer, and to cultivate a culture of international outlook and lifelong learning.
LLC Bionic Natali is a startup that has been creating bionic hand prostheses for more than two years and is now extending its work to legs. From the first steps, the project has been directed at developing a domestic functional bionic hand prosthesis based on neural networks and other algorithms. Within the project, a functional control system has been created; a tactile-feedback system that increases the controllability of the prosthesis has been implemented and integrated; and functional bionic hand and leg prostheses have been built. Based on this work, general insights into, and practical applications of, machine learning, neural networks and other algorithms have been developed. The cornerstone is a technology for recognizing gestures from electromyographic activity using a neural network or an analogous model. A bracelet is put on the arm (for disabled users, on the stump); non-invasive electrodes then measure the potential difference of neuromuscular activity; an electronic circuit processes the data and transmits it to the processor, where a neural network recognizes the intended grip or knee movement; finally, the data are used to control the bionic hand or leg.
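The recognition step in such a pipeline maps a feature vector extracted from the EMG electrodes to a gesture label. The company's system uses a neural network; the sketch below substitutes a much simpler nearest-centroid classifier, with entirely synthetic feature values, just to illustrate the stage:

```python
import math

# Toy stand-in for the EMG gesture-recognition stage: each gesture is a
# template feature vector (e.g. mean absolute EMG value per electrode).
# The real system uses a neural network; nearest-centroid is enough to
# illustrate the idea. All names and numbers are synthetic.

TEMPLATES = {
    "open_hand":  [0.1, 0.8, 0.2, 0.7],
    "close_grip": [0.9, 0.2, 0.8, 0.1],
}

def classify_emg(features):
    """Pick the gesture whose template is nearest in Euclidean distance."""
    def dist(name):
        return math.dist(features, TEMPLATES[name])
    return min(TEMPLATES, key=dist)

# A noisy reading close to the "close_grip" pattern:
print(classify_emg([0.85, 0.25, 0.75, 0.15]))  # close_grip
```

In the real device this classification runs continuously on the processor, and its output drives the motors of the bionic hand or knee.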
Ivaniuk Natallia Mikhailovna completed her master's degree at the Moscow Institute of Physics and Technology, Russia, in 2008. For more than 10 years she worked in IT and strategic consulting at companies such as SAP, EY, SUEK and Norilsk Nickel, and later founded a startup based in Skolkovo. She owns two companies, serving as CEO of LLC Bionic Natali and deputy director of LLC Bi-oN EMG. Together with other team members she is an author of 2 patents, with 2 more in preparation, and she has more than 4 publications in different journals. With the support of the Commission of the Russian Federation for UNESCO, LLC Bionic Natali was nominated for the UNESCO/Emir Jaber al-Ahmad al-Jaber al-Sabah Prize for Digital Empowerment of Persons with Disabilities in 2016.
Artificial intelligence will improve productivity, products and services across a broad range of applications, all benefiting humanity. NVIDIA is researching all of these areas and working closely with researchers, enterprises and startups in both problem-solving and getting started. Alison's talk will briefly cover the hardware and software that comprise NVIDIA's GPU computing platform for AI, from PC to data center, cloud to edge, and training to inference. The talk will also detail current state-of-the-art research and recent internal work combining robotics with virtual reality and reinforcement learning in an end-to-end simulator for training and testing robots. The system demonstrates a combination of all of NVIDIA's graphics and deep learning expertise, physics solvers and advanced rendering.
After spending her first year with NVIDIA as a Deep Learning Solutions Architect, Alison is now responsible for NVIDIA's Artificial Intelligence Developer Relations in the EMEA region. She is a mature graduate in Artificial Intelligence, combining technical and theoretical computer science with a physics background and over 20 years of experience in international project management, entrepreneurial activities and the internet. She consults on a wide range of AI applications, including planetary defence with NASA and the SETI Institute, and continues to manage the community of AI and machine learning researchers around the world, remaining knowledgeable in the state of the art across all areas of research. She also travels the globe advising on, teaching and evangelizing NVIDIA's platform.
Deep learning is one of the most exciting approaches in machine vision. Deep learning, especially convolutional neural networks (ConvNets or CNNs), is inspired by how human brains perceive and understand an image. A CNN is a deep, feed-forward artificial neural network with many artificial neurons (nodes) in each of its layers. Image classification is one of the most important tasks in machine vision: it underlies object detection and other kinds of pattern recognition. TensorFlow and other deep learning frameworks enable us to build machine vision applications that allow machines to identify objects automatically and quickly. Given all this, we can easily claim that deep learning, as an approach in machine learning, is a great asset to robotics.
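The core operation of a ConvNet, sliding a small filter over the image and applying a nonlinearity, can be shown without any framework. The sketch below is plain Python (a real system would use TensorFlow, as the abstract notes); the image and kernel values are illustrative:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as CNN libraries
    compute it) followed by a ReLU nonlinearity."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s))          # ReLU: keep positive responses
        out.append(row)
    return out

# A vertical-edge detector applied to an image with a bright right half:
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1], [-1, 1]]                   # responds where intensity jumps
print(conv2d(image, kernel))                  # peak response at the edge column
```

A CNN stacks many such filtered layers, learning the kernel values from data; the early layers end up as edge and texture detectors much like this hand-written one.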
Traffic proxied by a Layer 7 gateway can be redirected using intelligent algorithms and even dynamic, state-based awareness. This routing capability, called "API-aware traffic management", brings huge benefits in ensuring availability when connecting to multiple API instances in multiple clouds. At Layer 7 of the network protocol stack, load balancers are injected to sustain reliability and enforce optimizations; some of these balancers are intelligent traffic managers in which application proxies are embedded to manage the traffic. Service-oriented networking now attracts the greatest attention due to the new networking architectures and platforms that have entered the market; for example, Windows Azure is the outcome of Microsoft's effort to bring distributed service-oriented programming into the mainstream. This research investigates embedding a secure, intelligent load balancer at Layer 7, where services are offered to consumers and developers and traffic is managed in a way that maintains security and privacy. A plan is put together to overcome a vital challenge facing the building of trust for integration models in the cloud, where different vendors advertise published services and developers can save cost and effort by reusing them. At Layer 7, organizations migrate their products and solutions for a wide spectrum of applications to the cloud; this introduces new challenges in traffic management at the API level and in providing a secure scheme for forwarding traffic among trusted web services.
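The routing decision at the heart of API-aware traffic management can be sketched in a few lines. The instance names, health flags and latencies below are illustrative, not from the research:

```python
# Sketch of "API-aware traffic management": route each request to the
# healthy API instance (possibly in another cloud) with the lowest
# observed latency. All names and numbers are illustrative.

INSTANCES = [
    {"name": "api-cloud-a", "healthy": True,  "latency_ms": 40},
    {"name": "api-cloud-b", "healthy": True,  "latency_ms": 25},
    {"name": "api-cloud-c", "healthy": False, "latency_ms": 10},  # failed check
]

def route(instances):
    """Layer 7 decision: drop unhealthy backends, pick the fastest remaining."""
    alive = [i for i in instances if i["healthy"]]
    if not alive:
        raise RuntimeError("no healthy API instance available")
    return min(alive, key=lambda i: i["latency_ms"])["name"]

print(route(INSTANCES))  # api-cloud-b
```

A production Layer 7 balancer layers TLS termination, authentication and per-API policy on top of this same select-a-backend core, which is where the security and privacy concerns of the research come in.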
Dr. Dhyaa S. Al-Azzawy completed his PhD at the University of Communication and Technology, Iraq. He is the Dean of the College of Computer Science and Information Technology. He has published more than 12 papers in reputed journals and serves as an editorial board member for journals of repute.
In 1997, the Guided Local Search (GLS) algorithm was proposed by Voudouris and Tsang to search for solutions to complex problems. In 2004, Webster introduced Gravitational Emulation Local Search (GELS), a powerful local search algorithm for solving complex problems. The main idea of GELS is based on gravitational force, which causes objects to attract one another in such a way that a heavier object exerts a greater gravitational force and attracts lighter objects. The attraction force between two objects depends on the distance between them. In the GELS algorithm, possible solutions in the search space are divided into several categories according to their fitness values. Each of these categories is called a dimension of the solution, and each has an initial velocity. Equation (1) computes the gravitational force between the Current Solution (CU) and a Candidate Solution (CA). This force (F) is added to the velocity vector along the path of the current motion.
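The text refers to Equation (1) without reproducing it. In published descriptions of GELS the force between the current and candidate solutions is usually written in the following form (a reconstruction from the literature on the algorithm, not copied from this abstract):

```latex
% Gravitational force between the Current Solution (CU) and a Candidate
% Solution (CA): f(.) is the fitness acting as mass, G a constant, and
% R the distance between the two solutions in the search space.
F = \frac{G \,\bigl( f(CA) - f(CU) \bigr)}{R^{2}} \tag{1}
```

A positive $F$ (a fitter candidate) accelerates the search toward $CA$, which is how the force is "added to the velocity vector" of the current motion.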
Current developments in numerical control, 3D technology and polymers have great potential as regards irregular architectural shapes, while contributing to freeing people from hard physical labour, improving quality of performance and minimising volumes of waste. Vertical components are erected automatically, while horizontal structures are engineered using prefabricated materials. The presented "stroptronic" technology uses automatic formwork that moves together with the robot embedding materials to construct monolithic ceilings of reinforced concrete. Quick-curing CSA cement-based concrete assures strength across its cross-section (over 20 MPa just 1.5 h after reaction with water) along the entire length between the supports of the ceiling. The robot continuously places reinforcement and concrete to form a band, while moving together with the formwork perpendicularly to the ceiling length. The automatic formwork moves slowly under the ceiling being made (approx. 1 cm/min) to support the curing concrete over a width of approx. 1.5 m, until the slab achieves self-supporting strength across its cross-section. The article presents the "stroptronic" technology for monolithic ceilings of reinforced concrete, which forms part of the "ja-wa" system construction methodology. The example of a ceiling with main steel bars and distribution bars of glass fibre illustrates the functioning of devices controlled by a computer that continuously monitors the movements of the automatic formwork and of the robot placing reinforcement bars and concrete mix, in step with the quick increase in concrete strength as the ceiling cross-section grows. At the same time, the presented CSA-based concrete properties point to its suitability for automated works.
Andrzej Więckowski, PhD, is an Associate Professor at the Cracow University of Technology (PK), Faculty of Civil Engineering, and a lecturer at PK, the AGH University of Science and Technology, the Jagiellonian University and the University of Economics. In 1989–1994 he was Project Manager for "M-04 low residential buildings" and manager of the Innovation and Implementation Unit for this prototype technology. In 1993 he made a study visit to Canada and the USA. Since 2010 he has managed the "ja-wa" project at PK on automated wall construction by material injection, and since 2014, at AGH, he has specialized in extrusion concrete laying for ceilings. He is the author of four monographs and many papers in renowned scientific journals.
The appearance of a systemic informational culture raises the problem of education being treated as a change in human nature itself. The challenge is how to develop the rational part of human consciousness (CR) responsible for semantic understanding. Cognogenesis in the natural sciences has happened by means of the mathematical description of laws and the generalization of theories. Comprehension of any theory reduces to making its meanings evident; to this end they must be described and studied. The ideas of mathematical glottogonia are therefore of paramount importance for explaining and modeling intellectual processes. Human consciousness has the transcendental ability to apperceive meanings, which can be expressed in a language of categories (LC) that allows their subsequent investigation. This linguistic means possesses the universal abstractions lying at the foundation of all mathematical theories. A method of universal tutoring (TU) is suggested, based on the identification of these universals. Universal education creates an intellectual reality for semantic study and thus establishes the conditions for the self-creation of natural intellect (IN). TU must be continuous, prompt and adaptive to every person. Only supercomputer artificial intelligence (IA) is able to help a student cope with the difficulties caused by the necessity of perceiving sophisticated meanings. Natural intellect IN = CR and has IA as its image. They use the same educational space in the form of electronic libraries. IA can search for relevant courses as required and represent them in the form of categories. A partnership between man and IA, with the goal of the person's universal tutoring, can be put into effect by means of a personal ontological knowledge base.
Vasilyev N.S. completed his PhD at Lomonosov Moscow State University, Russia, and postdoctoral studies in High Productivity Computer Systems at the Russian Academy of Sciences, Russia. He is a professor at Bauman Moscow State Technical University, Department of Fundamental Sciences, Russia. His fields of interest are optimal control theory, optimization, game theory, parallel computing, network routing, rational education, the study of natural intellect, and artificial intelligence. He has published more than 20 papers in reputed journals.
Human-machine interfaces (HMI) and machine-mediated man-to-man (M2M) communication systems are multimodal. The speech modality is a central element and can be combined with other visual or gestural modalities. Multiplying the information channels can improve the robustness and effectiveness of the recognition system. In the framework of such interfaces, non-verbal cues such as postures and facial expressions are automatically interpreted as additional sources of information. Audiovisual speech/speaker recognition has received significant attention in recent years. Speech and speaker recognition are not perfect, especially in noisy conditions, where part of the speech content may be recovered through lip reading. Problems also arise in visual recognition systems when the quality of the acquired image is poor; acquisition conditions, as well as variations in facial expression, affect the results. Robust solutions for simultaneous speech and speaker recognition therefore employ several modalities, such as the spoken word, lip texture and lip movement. Lip reading alone generates ambiguities, since a number of speech sounds are not visible and others have similar shapes. With gesture recognition, two phonemes of the same visual shape are associated with two different codes, presented by the hand pointing to a precise position on the face. The visual perception of the joint shape of the lips and of the manual code allows speech identification. In this context, several research efforts attempt to assess the contribution of modalities such as gesture and facial expression to non-verbal communication. We focus on the benefit of the visual modality in addition to speech, since the visual signal is a powerful source for improving speaker recognition performance in noisy conditions.
We have proposed the simultaneous use of two modalities, acoustic and visual, to improve recognition scores in both the absence and the presence of noise. In addition, sign recognition is used to give meaning to verbal and non-verbal communication between man and machine, or between humans in general.
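The combination of the two modalities can be illustrated by a minimal score-level fusion sketch. The weighting scheme and the score values below are illustrative assumptions, not the authors' actual system; the idea is only that, as acoustic reliability drops under noise, the decision shifts toward the visual (lip) modality.

```python
# Minimal sketch of score-level audio-visual fusion for speaker recognition.
# All scores and the reliability weighting are invented for illustration.

def fuse_scores(audio_score, visual_score, audio_reliability):
    """Weighted sum of per-modality match scores.

    audio_reliability in [0, 1]: low under heavy acoustic noise,
    shifting the decision toward the visual (lip) modality.
    """
    w = audio_reliability
    return w * audio_score + (1.0 - w) * visual_score

def identify(candidates, audio_reliability):
    """candidates: dict speaker -> (audio_score, visual_score)."""
    return max(
        candidates,
        key=lambda spk: fuse_scores(*candidates[spk], audio_reliability),
    )

candidates = {
    "alice": (0.9, 0.4),   # strong acoustic match, weak lip match
    "bob":   (0.3, 0.8),   # weak acoustic match, strong lip match
}
print(identify(candidates, audio_reliability=0.9))  # clean audio -> alice
print(identify(candidates, audio_reliability=0.1))  # noisy audio -> bob
```

The same pattern extends to a third (gestural) score by adding one more weighted term.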
In this article, we present a dimensional synthesis of the Gough-Stewart platform that takes kinematic and dynamic performance into account as optimization criteria. To do so, we first define the objective functions, constraints and decision variables (the geometric parameters). Second, we formulate the problem mathematically as a Multi-Objective Optimization Problem (MOOP). A genetic algorithm, NSGA-II, is used to solve this type of problem.
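The core of NSGA-II is ranking candidate solutions by Pareto dominance rather than by a single scalar cost. A minimal sketch of that dominance test is shown below on hypothetical (kinematic cost, dynamic cost) pairs; the actual synthesis would evaluate the Gough-Stewart kinematic and dynamic models for each candidate geometry.

```python
# Illustrative sketch of the Pareto-dominance test at the heart of NSGA-II.
# Objective vectors are hypothetical (kinematic cost, dynamic cost) pairs,
# both minimized.

def dominates(a, b):
    """True if solution a is no worse than b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset (the rank-1 front of NSGA-II)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

population = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(population))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

NSGA-II repeats this sorting over successive fronts and adds a crowding-distance tie-breaker to preserve diversity along the front.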
With the development of modern manufacturing technologies, weld quality monitoring and seam tracking have become prominent topics in automated robotic welding. To obtain good welding quality, the penetration state and defect location must be kept at a suitable level. In this paper, an experimental system based on a pulsed MIG robotic welding system with a double microphone array and a visual sensor has been established. Different noise sources are analyzed using microphone array techniques. A new denoising method, blind signal separation, is used to reduce environmental noise and extract features from the double-arc sound signal. After the acoustic features are extracted, the relationship between welding defects and the arc sound signal is established by a support vector machine. Then, to predict the defect position with the microphone array, a beamforming algorithm is used to identify the arc sound source and track the generation of defects during the pulsed MIG welding process. The experimental results show that the prediction accuracy of defect recognition reaches 90% over 50 groups of experimental data. At the same time, welding defects are identified and located according to the time difference and location variation. The results prove that the double microphone array has better recognition efficiency than a single microphone sensor. This paper aims to lay foundational work and provide a new idea for welding quality control based on a double microphone array for the pulsed MIG welding process.
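The defect-classification step described above (acoustic feature vector in, defect label out) can be sketched with scikit-learn's SVM. The two features and the synthetic clusters below are stand-ins invented for illustration; the paper extracts real features from the separated arc-sound signal.

```python
# Sketch of SVM defect classification on synthetic acoustic features.
# Feature meanings and data distributions are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 2-D features (e.g. sound energy, spectral centroid):
normal = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
defect = rng.normal(loc=[1.5, 1.5], scale=0.3, size=(100, 2))
X = np.vstack([normal, defect])
y = np.array([0] * 100 + [1] * 100)   # 0 = sound weld, 1 = defect

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.1, -0.1], [1.4, 1.6]]))  # expect sound weld, then defect
```

In the reported system, the positive predictions would then be cross-checked against the beamformed sound-source location to place the defect along the seam.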
Na Lv completed her PhD at Shanghai Jiaotong University, China, and postdoctoral studies at the University of Western Sydney, Australia. She is a postdoctoral researcher at the School of Naval Architecture, Ocean & Civil Engineering, Shanghai Jiaotong University, China. She has published more than 24 papers in reputed journals and serves as an academic assistant editor of Transactions on Intelligent Welding Manufacturing.
Modern industry is developing in the direction of ultra-precision and artificial intelligence, with growing attention to unmanned operation and stability. High-end equipment takes an increasing share, so more effective maintenance strategies are required. This is especially true for the bearing, the core component and "heart" of a machine, where failure is more than a money issue: once a malfunction stops a whole production line, economic losses of tens of millions can follow. Both economically and practically, real-time monitoring of bearings is therefore of great significance. In this paper, an intelligent bearing with online monitoring ability is proposed. It is embedded with sensors that aim to detect potential faults in advance. The primary issue to address is the conventional reliance on cables for power supply and signal transmission, which brings great limitations. Based on a double-row cylindrical roller bearing, we propose a wireless sensing system that powers itself by rotation, realizing online monitoring. In the space between the double rows of cylindrical rollers, a pulse magnetization ring and a fractional-slot winding are placed on the inner and outer ring respectively. A sinusoidal voltage whose amplitude and frequency are related to the rotational speed is generated. To avoid the electromagnetic shielding effect caused by the metal material, a novel microstrip antenna working at 2.4 GHz is presented. Temperature, vibration, rotation and strain signals are fused, and a neural network is applied to diagnose faults from the feature vectors. By making the bearing intelligent, periodic maintenance is turned into real-time control.
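The self-powering principle (an EMF whose amplitude and electrical frequency both scale with shaft speed) can be sketched numerically. The constants k (EMF per rad/s) and p (pole pairs) below are illustrative assumptions, not measured values from the paper.

```python
# Sketch of the speed-dependent EMF of the in-bearing generator.
# k (V per rad/s) and p (pole pairs) are hypothetical constants.
import math

def induced_emf(t, omega, k=0.05, p=8):
    """Instantaneous EMF (volts) at time t for shaft speed omega (rad/s)."""
    return k * omega * math.sin(p * omega * t)

def peak_emf(omega, k=0.05):
    """EMF amplitude grows linearly with shaft speed."""
    return k * omega

def electrical_freq_hz(omega, p=8):
    """Electrical frequency is p times the mechanical rotation frequency."""
    return p * omega / (2 * math.pi)

omega = 2 * math.pi * 25                  # 1500 rpm = 25 rev/s shaft speed
print(round(peak_emf(omega), 2))          # ~7.85 V amplitude at this speed
print(round(electrical_freq_hz(omega)))   # 200 Hz with 8 pole pairs
```

The same relation is what lets the winding double as a rotation sensor: the measured electrical frequency directly encodes shaft speed.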
Yuanzhi Liu is a PhD student at Shanghai Jiaotong University, China, having graduated with a bachelor's degree from Harbin Institute of Technology, China. During his undergraduate study, he won the championship in the 18th Chinese Robot Competition. His main research directions are multi-agent robot systems and online monitoring. He has filed three invention patents, one of which has been granted.
Modern automation systems rely on fixed programming routines to carry out their tasks. Effort is required to adjust an ongoing production routine through re-programming, re-design or a complete overhaul of the system for new production capabilities. If a fully automated flexible system were introduced onto such a production line, the complete reprogramming required for new products could be automated with limited loss in production time. Instead of reprogramming each new position the robotic system must move to, the system takes over real-time control of the robot and carries out the required steps autonomously. The benefit of the presented system is that the robot does not need to be reprogrammed for every new routine; instead, it is controlled in a real-time environment to carry out new procedures based on external sensors (in this case, image-processing sensors). Using a real-time system could remove the need for a fixed programming environment and replace it with an automatically changing programming setup, resulting in a system that adapts to new product introductions through real-time machine vision processing techniques. The aim of this project is to use a vision-aided system that can learn new product designs in real time, allowing the robot system to adjust accordingly without manual reprogramming. This can lead to increased production output for newly introduced products, while also offering alternative implementation options.
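The closed-loop idea (locate the part in the camera frame, then derive a motion correction instead of a fixed preprogrammed position) can be sketched as follows. The blob detector, pixel-to-metre gain and threshold are hypothetical placeholders, not the project's actual vision pipeline.

```python
# Sketch of vision-driven real-time correction: find the part's centroid
# in a synthetic grayscale frame and emit a proportional move command.
# Threshold and gain values are illustrative assumptions.
import numpy as np

def locate_part(frame, thresh=0.5):
    """Centroid (row, col) of bright pixels in a grayscale frame."""
    rows, cols = np.nonzero(frame > thresh)
    return rows.mean(), cols.mean()

def move_command(frame, target_px, gain=0.01):
    """Proportional correction (hypothetical metres-per-pixel gain)
    that would steer the part centroid toward target_px."""
    r, c = locate_part(frame)
    return gain * (target_px[0] - r), gain * (target_px[1] - c)

frame = np.zeros((100, 100))
frame[40:44, 60:64] = 1.0          # synthetic bright blob standing in for a part
dr, dc = move_command(frame, target_px=(50, 50))
print(round(dr, 3), round(dc, 3))  # 0.085 -0.115
```

Because the command is recomputed every frame, introducing a new product shape only changes what the detector finds, not any stored robot program.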
Herman Vermaak obtained his PhD in Electronic Engineering at the University of Twente in the Netherlands. He has industrial experience in the manufacturing and assembly industry, in particular the motor and tyre industries. He holds a professorship in Computer Engineering and is the leader of the Research Unit in Evolvable and Manumation Systems (RGEMS). He has supervised more than thirty completed postgraduate studies, has authored or co-authored twenty-eight journal papers, and has presented or co-presented more than thirty-five papers at international conferences. He has been serving as a reviewer for a number of reputed journals.
A Wireless Sensor Network (WSN) is a self-organized wireless network system constituted by a large number of small autonomous entities scattered randomly in the environment. Sensor nodes, a short distance from each other, gather information and transmit it to the base station. They are characterized by a short lifetime and limited power, computational capacity and memory. WSNs are used in a wide range of civilian, industrial and military applications. Collaboration between WSNs and mobile robots creates a range of advanced hybrid applications and improves the construction of mobile sensor networks by applying self-repositioning and self-deploying techniques. In fact, robotics can be used to solve many problems in WSNs, such as localizing nodes, acting as data mules, redeploying nodes, detecting and reacting to sensor failure, aggregating sensor data, and providing mobile battery chargers for the sensors. Depending on the application, a hybrid architecture for the WSN is defined, and the mobile robots have to cooperate and coordinate their movements.
Chérifa Boucetta is currently a lecturer at the Department of Computer Science, University of Technology of Belfort-Montbéliard (UTBM, France), and a researcher in the OMNI/DISC team of the FEMTO-ST institute (Franche-Comté Électronique Mécanique Thermique et Optique – Sciences et Technologies). She received her PhD, M.Sc. and engineering diploma in computer science from the National School of Computer Science, University of Manouba, Tunisia, in 2016, 2010 and 2009 respectively. Her research interests focus on Wireless Sensor Networks (WSN), vehicular networks (VANET), optimization, IoT, routing, clustering, energy consumption, and coverage and connectivity. She is a member of the technical committees of many international conferences and a reviewer for many journals.
Humans use memories, twisted or guessed facts and other implicit information, stored or collected, to reason about the most appropriate solutions under given environmental conditions. They are adaptive rather than reactive, and this adaptation happens through constant interaction. Unlike humans, robots do not understand context by default and are therefore mostly reactive. Deterministic chaos is a characteristic of the real world, where the existence of living beings depends mostly on their capability to adapt to changes instead of controlling them. Compared to conventional approaches, where robots are preprogrammed to react to a finite number of environmental occurrences, contextual awareness enables the modeling of humanlike adaptation skills. Computational models, the focus of this talk, can be understood as context-to-data interpreters that transform (high-level or implicit) information into (low-level or explicit) data, allowing machines to make context-driven decisions. The basic model contains three main parts. The first part tracks and collects significant environmental information following the principles of ubiquitous computing. The second part represents formal knowledge about the domain of interest. The model also contains a probabilistic component, realized as a Bayesian Network, that ensures a single solution in a given context. The overall methodology will be demonstrated through three separate examples illustrating reasoning based on: (i) the phenomenon of social capital, (ii) human bodily awareness and (iii) human emotions. The design philosophy focuses on the effects of real human reasoning without defining the phenomenon itself.
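The probabilistic component's role (collapsing uncertain evidence into a single context-driven decision) can be sketched with a one-node Bayesian update. The variables and all probabilities below are invented for illustration; the talk's model uses a full Bayesian Network over the tracked environmental information.

```python
# Minimal sketch of context-driven decision making via Bayes' rule.
# The hidden context, cue and probability values are hypothetical.

# Prior over the hidden context (e.g. the user's emotional state):
prior = {"calm": 0.7, "stressed": 0.3}
# Likelihood of an observed cue given each context:
likelihood = {
    ("raised_voice", "calm"): 0.1,
    ("raised_voice", "stressed"): 0.8,
}

def posterior(cue):
    """Normalized posterior P(context | cue)."""
    unnorm = {c: prior[c] * likelihood[(cue, c)] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

def decide(cue):
    """Single most probable context given the evidence, as the model's
    probabilistic component guarantees one solution per context."""
    post = posterior(cue)
    return max(post, key=post.get)

print(decide("raised_voice"))   # stressed
```

In the full model, the cue node would be one of many evidence nodes fed by the ubiquitous-computing layer, and the decision would condition the robot's adaptive behavior.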
Tomislav Stipancic completed his PhD in 2013. He is an Assistant Professor in Robotics and Artificial Intelligence at the Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, and currently heads the Faculty's Laboratory for Manufacturing and Assembly Systems Design. In 2016, he completed his JSPS postdoctoral research at Kyoto University, Japan. He was also a visiting researcher at KTH (Stockholm) and URJC (Madrid). His research chiefly focuses on Cognitive Informatics, Affective and Probabilistic Robotics, Artificial Intelligence, and HRI. As an author or co-author he has published more than 25 papers.
While the image of architecture has evolved over its existence, its principles remain the same: it is always about designing and raising buildings. Since the end of the 1980s, however, its practice has undergone a profound change. The introduction of the computer and its logic of information processing into architecture, beyond limited use for simple visualization, deeply influenced the discipline, which has tried to renew its codes and language. Indeed, CAD software can act on the structural properties of objects, that is to say, manipulate mathematical models rather than images. The designer no longer draws; he controls calculations and operations that constantly change the virtual computer model depending on the experiments he performs. Our research considers another aspect of the architecture/ICT relationship by focusing on the implementation of an intelligent cognitive architecture in the rooms of patients hospitalized in a psychiatric environment, as a curative approach. Emotion is still a little-studied field in Human-Computer Interaction (HCI), yet man is a deeply emotional creature, relying on affect far more than logic or reason. The methodology followed in this research correlates two protocols: an architectural characterization protocol and a clinical evaluation protocol. These protocols consisted of investigations at the Er Razzi psychiatric hospital in Annaba, Algeria. First, interviews were conducted with the heads of services, the nursing staff and patients hospitalized in the men's and women's departments; questionnaires were also distributed to nurses, psychologists and psychomotor therapists. Second, a collaboration with a computer scientist allowed us to set up a database from which we established ambient scenarios for simulating the defined situations.
The architectural space we aim to set up is a tool to aid the therapy of psychiatric patients, complementing the pharmacological approach through its reflexive and dynamic nature. The generated atmosphere interacts with the psychiatric patient, and the space thus evolves spontaneously according to his or her psychic and physiological state.
An automatic speech recognition (ASR) system is, in general, a system capable of converting an acoustic signal produced by a speaker into a sequence of words. Such tools are widely used in a variety of areas. Building a modern ASR system has recently become a fairly easy task and has developed strongly with the evolution of the available tools. However, the process remains constrained by technological resources, such as speech and text corpora. An alternative solution is to exploit the phonetic structures shared between different languages to build an ASR system for a target language. In this article we describe an experiment carried out in this respect: we used an ASR system for Arabic to create one for Amazigh. The originality of our work comes from the desire to address this language, which has recently become an official language in Morocco and has little or insufficient resources for automatic speech recognition. In addition, the two languages share several phonemes as well as several words. The completed system achieved a recognition rate of around 73% per word. The potential and effectiveness of the proposed approach are demonstrated by experiments and comparisons with other approaches.
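A per-word recognition rate such as the 73% reported above is typically scored by word-level edit distance between the reference and the recognized transcript. The sketch below shows that standard computation; the example strings are English placeholders, not Amazigh data.

```python
# Sketch of per-word accuracy scoring via Levenshtein distance over tokens.

def word_errors(ref, hyp):
    """Minimum substitutions + insertions + deletions between word lists."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)]

def word_accuracy(ref, hyp):
    """1 - WER: fraction of reference words correctly recognized."""
    return 1.0 - word_errors(ref, hyp) / len(ref.split())

print(round(word_accuracy("the cat sat on the mat", "the cat sat on a mat"), 3))  # 0.833
```

Averaging this score over a test corpus yields the system-level per-word rate.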
Ali Sadiqui holds a PhD in computer science from the University of Dhar El Mahraz, Fez, Morocco. He is currently a trainer-researcher at the OFPPT, where he teaches computer science courses. His research mainly concerns the automatic processing of the Arabic language. He has published several articles in this field in indexed scientific journals.
Complicated illumination and weather conditions significantly decrease the robustness of robotic vision systems. This talk focuses on image pre-processing for robot vision under complex illumination and dynamic weather conditions. It will report cutting-edge models and algorithms in a systematic way, from a viewpoint grounded in atmospheric physics and the imaging mechanism. The talk covers new fundamental models for illumination and scattering that can build a bridge between images and environmental lighting. It also includes practical algorithms such as shadow/highlight removal, intrinsic image derivation, and rain/snow/fog removal. These technologies would enable robots to operate effectively in different lighting and weather conditions, i.e., give them all-weather operating capacity.
Professor Jiandong Tian's research interests focus on robot vision under complicated illumination and meteorological conditions. In this field, he has published more than 50 papers in reputed journals and conferences. He received the first-level Liaoning Natural Science Achievement Award in 2010 and 2016. Prof. Tian served on the Organizing Committee of the 2015 IEEE International Conference on CYBER Technology in Automation, Control and Intelligent Systems (CYBER 2015) and on the Technical Program Committee of CYBER 2016. He is a council member of the Chinese Society for Optical Engineering (CSOE). He is also a peer-review expert for the 'National Science & Technology' program and the Major Research Plan of the National Natural Science Foundation of China in robot vision.
In recent years, emotion recognition has been one of the hot topics in computer science, especially in Human-Robot Interaction (HRI) and Robot-Robot Interaction (RRI). Through emotion recognition and expression, robots can better recognize human behavior and emotion and can communicate better. There is some research on unimodal emotion systems for robots, but because human emotions in the real world are multimodal, multimodal systems can work better for this purpose. Besides this multimodality of human emotion, using a flexible and reliable learning method can help robots recognize better and makes interaction more reliable. Deep learning has shown promising results in this area, and the model presented here is a multimodal method that uses three main traits (facial expression, speech and gesture) for emotion recognition and expression in robots. It is a cognition-based model that shows how deep learning techniques can help implement the cognitive model better.
Piezoelectric ultrasonic motors (USMs) are a new type of actuator that uses ultrasonic-level mechanical vibrations as the driving source. USMs have a different construction, characteristics and operating principle than conventional electromagnetic motors. They offer important features such as high holding torque, fast response, high torque density, silent operation, no electromagnetic noise and compact size. USMs are particularly superior in holding torque and in precise speed/position response, and have therefore attracted considerable attention for servo-drive applications. Owing to these features, USMs have recently begun to be used in industrial, robotic, space, medical and automotive applications. This paper investigates the use of piezoelectric USMs in robotic applications. Applying ultrasonic motors to robotics has been attempted in a variety of studies, such as robot hands and surgical robots. By combining new robot control systems with piezoelectric USMs, researchers have proposed creating micromechanical systems that are small, cheap and completely autonomous. Different types of USMs for a variety of applications are examined in the present study. A three-joint robot arm, a five-fingered robot hand, a tracked mobile robot, a micro-robot used in a nanosatellite, an eye robot system, an elbow joint for robotic rehabilitation, a surgical robot for the magnetic resonance imaging environment, an in vivo micro-robot, and an active endoscope with a multi-degree-of-freedom ultrasonic motor are introduced as practical examples.
Erdal Bekiroglu received his MSc and PhD degrees from the Gazi University Graduate School of Natural and Applied Sciences, Turkey. He is a professor and head of the Department of Electrical & Electronics Engineering at Abant Izzet Baysal University, Turkey. His research interests are computer-controlled systems, the drive and control of electrical machines, piezoelectric ultrasonic motors, renewable energy systems, and superconductors. He has published more than 20 papers in reputed scientific journals.
Scotland has recently been affected by two unprecedented episodes of political uncertainty: the Scottish referendum for independence (September 2014) and Brexit (June 2016). To investigate the effect of political uncertainty on private investment I proceed in two steps: first, I use an unsupervised machine learning algorithm to unveil three types of policy-related uncertainty indices displayed by Scottish news media; second, I examine the impact of these indices by applying a standard investment regression to a longitudinal panel data set of 700 Scottish firms. Scottish political uncertainty, which covers around 10% of all news articles regarding economic uncertainty, increases steadily from when the UK approved the Scottish referendum for independence (January 2012) until its actual occurrence (September 2014). Brexit uncertainty (4% of all economic-uncertainty news) peaks during the Brexit referendum (June 2016) and the general election of June 2017. Lastly, Scottish policy uncertainty (9% of all economic-uncertainty news) peaks during the heavy Scottish public-sector strikes (November 2011) and Brexit (June 2016). The most conservative findings suggest that a one-standard-deviation increase in Scottish political uncertainty co-moves with a drop in the investment rate of 15% of the average firm's investment rate, and that a one-standard-deviation increase in Brexit uncertainty is associated with a 10% drop in the investment rate. Moreover, and contrary to common wisdom, manufacturing firms do not appear to cut investment more abruptly than non-manufacturing firms in response to a rise in political uncertainty (Brexit or Scottish). These results are robust to the incorporation of a wide range of variables capturing investment opportunities, such as cash flows, sales growth rates and GDP growth rates, as well as to alternative measures of uncertainty and election years.
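One standard way to implement the unsupervised step (grouping uncertainty-related news text into topical indices) is TF-IDF vectorization followed by k-means clustering. The toy headlines below are invented stand-ins for the Scottish news corpus, and this pipeline is a plausible sketch of the approach, not necessarily the algorithm the study used.

```python
# Sketch of unsupervised clustering of uncertainty-related news text into
# three topical groups. Headlines are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "scottish independence referendum vote",
    "independence campaign scotland referendum",
    "brexit leave eu negotiations",
    "eu exit brexit deal uncertainty",
    "public sector strikes pay policy",
    "policy dispute public sector pay",
]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # documents 0-1, 2-3 and 4-5 each share a cluster label
```

Counting, per month, the share of articles falling in each cluster would then yield index series like the ones the abstract describes.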
An in-line inspection (ILI) robot is considered an inevitable requirement for performing non-destructive testing methods efficiently and economically. The detection of flaws that could lead to leakages in buried pipes has captured a lot of attention in the oil, gas and water-resource industries. Modelling a flaw-detection system is difficult because of the irregularity and randomness of defects. This work covers the study of non-destructive testing methods using fused inspection sensors, LiDAR and optic sensors, specializing in crack detection. Studies on ILI robots are also reviewed in order to construct an efficient gauge. A support vector machine (SVM) technique is applied to the LiDAR data, while a convolutional neural network (CNN) is implemented on the optic sensor's data. The prototype robot operated successfully in a lab-scale environment, where the obtained data clearly described the status of the inspected pipe. The sensors from the proposed list were also well integrated during robot operation. For the LiDAR data, experiments used clustering techniques to evaluate the feasibility of segmenting suspicious areas; among the examined techniques, the SVM showed the most promising results. To classify the raw image data from the optic sensor, a 7-layer CNN-based model was constructed. During the model-testing phase, a sliding-window technique was implemented to scan the test images, generating 494 small windows, and the CNN model classified each window. The testing results show that the model achieved 99.77% accuracy.
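The sliding-window scan of the optic-sensor images can be sketched as a generator that crops every window position; each crop would then be fed to the 7-layer CNN. The window and stride sizes below are illustrative, not the paper's actual settings.

```python
# Sketch of the sliding-window scan over a pipe-wall image.
# Window/stride sizes are hypothetical.
import numpy as np

def sliding_windows(image, win=32, stride=16):
    """Yield (row, col, crop) for every full win x win window of a 2-D image."""
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, image[r:r + win, c:c + win]

image = np.zeros((64, 96))               # stand-in for a grayscale camera frame
crops = list(sliding_windows(image))
print(len(crops))                        # 3 window rows x 5 window columns = 15
```

Running the classifier per crop and keeping the (row, col) of positive windows is what localizes the crack within the frame.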
Le Dinh Van Khoa (MSc) finished his Master's degree in Computer Science at the University of Nottingham in 2014. He joined MIMOS Bhd as a research assistant on several projects and is currently a PhD candidate at the University of Nottingham, Malaysia Campus. ZhiYuan Chen (PhD in Computer Science) is an Assistant Professor at the University of Nottingham School of Computer Science in Malaysia and a Principal Consultant with MIMOS at the Accelerative Technology Lab. She received her MPhil and PhD in Computer Science from the University of Nottingham in 2007 and 2011 respectively. Before joining UNMC, she was a research associate at the UK Horizon Digital Economy Research Institute. Her research interests are in the areas of computer science, machine learning, data mining, user modelling and artificial intelligence. The third author (BSc) finished a BSc degree in Computer Science at the University of Nottingham in 2016, began working as a research assistant with Dr. Chen in July 2016, and is currently an MPhil student with the Department of Electrical and Electronic Engineering, the University of Nottingham, Malaysia Campus.
Robot Learning from Demonstration (RLfD) in an environment whose state changes throughout the course of a task is very challenging. RLfD has mostly been exploited successfully only in non-varying environments to reduce programming time and cost, e.g. fixed manufacturing workspaces. Non-conventional production lines necessitate Human-Robot Collaboration (HRC), implying that robots and humans must work in shared workspaces. In such conditions, the robot needs to avoid colliding with objects that are moved by humans in the workspace. Therefore, the robot is required not only (i) to learn a task model from demonstrations, but also (ii) to learn a control policy to avoid a stationary obstacle, and furthermore (iii) to build a control policy from demonstration to avoid moving obstacles. I will discuss the recent results of an incremental approach to RLfD addressing all three problems. A series of experiments illustrates the effectiveness of the approach in addressing the need for fast programming by demonstration in small- and medium-sized production lines.
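One common way to combine a learned goal-reaching policy with obstacle avoidance is to modulate the attractor dynamics with a repulsive term around the obstacle. The point-mass dynamics, gains and geometry below are invented for illustration; the talk's actual approach learns these policies incrementally from demonstrations rather than hand-coding them.

```python
# Toy sketch of attractor-plus-repulsion obstacle avoidance for a point
# "end effector". All gains and positions are hypothetical.
import numpy as np

def step_policy(x, goal, obstacle, k_att=1.0, k_rep=0.01, dt=0.05):
    """One Euler step: attraction to the goal plus repulsion from the obstacle."""
    to_goal = goal - x
    away = x - obstacle
    d = np.linalg.norm(away) + 1e-6
    v = k_att * to_goal + k_rep * away / d**3   # repulsion magnitude ~ 1/d^2
    return x + dt * v

x = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.1])                 # slightly off the direct path
for _ in range(1000):
    x = step_policy(x, goal, obstacle)

print(np.round(x, 2))  # settles close to the goal, nudged slightly off the obstacle
```

A moving obstacle is handled by re-evaluating the repulsive term against the obstacle's current position at every step, which is essentially why problem (iii) above is harder than (ii): the policy must stay valid as the repulsion field shifts.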
Super-resolution has become one of the best techniques for obtaining a high-resolution image from a number of low-resolution images, because of its simplicity and wide range of applications in many fields of science and technology. Several super-resolution methods exist, but in this paper an iterative regularization method combined with a neural network is chosen, because it produces highly consistent results in less time with a very user-friendly approach. It takes care of the noise in the initial stage and produces a concrete result once the neural network is introduced. In addition to handling the noise, it controls the vulnerable parameters and yields a higher-quality super-resolved image than those reported in the literature.
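The iterative-regularization idea can be illustrated on a toy 1-D problem: repeated Landweber-style updates with a Tikhonov penalty recover a sharp signal from its blurred observation. The forward model, step size and regularization weight below are illustrative choices, and the paper's neural-network refinement stage is not reproduced here.

```python
# Toy sketch of iterative regularized reconstruction (Landweber + Tikhonov):
#   x <- x + step * (A^T (y - A x) - lam * x)
# A is a hypothetical 3-tap moving-average blur.
import numpy as np

n = 32
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            A[i, j] = 1 / 3

x_true = np.zeros(n)
x_true[10:14] = 1.0            # a sharp "edge" feature to recover
y = A @ x_true                 # blurred low-resolution observation

x = np.zeros(n)
step, lam = 0.5, 1e-3          # step size and Tikhonov weight, chosen ad hoc
for _ in range(500):
    x = x + step * (A.T @ (y - A @ x) - lam * x)

# The iterate should fit the data and be sharper than the blurred input:
print(round(float(np.linalg.norm(y - A @ x)), 3))  # small data residual
```

The regularization term lam * x is what keeps the inversion stable in the noise-dominated directions; in the paper's pipeline the neural network would further refine this regularized estimate.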
Data mining is the process of discovering new knowledge from stored data sets. Mining such knowledge is easier when the data set is text or numeric; the same extraction becomes complicated for multimedia data sets, which pose a real challenge to users. For this reason, various research papers and research projects are available. Over the last couple of years, web-based information has increased enormously, and extracting the right content from this huge volume has become a challenging task for the research community. Within this growing multimedia field, video data play a very important role in video data mining. Video data are dynamic in nature, so video data sets must be properly arranged and indexed; organizing these data sets reduces the user's search burden and also improves retrieval performance. This paper brings a new idea to the field of video data mining with the help of a hierarchical clustering technique.
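The indexing step can be sketched with SciPy's agglomerative (hierarchical) clustering: per-shot feature vectors are linked into a dendrogram and cut into groups of similar segments. The 2-D "features" below are toy stand-ins invented for illustration; real video descriptors (color histograms, motion features, etc.) would take their place.

```python
# Sketch of hierarchical clustering of video-shot feature vectors for indexing.
# Feature values are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

features = np.array([
    [0.10, 0.20], [0.15, 0.22], [0.12, 0.18],   # e.g. three similar indoor shots
    [0.90, 0.80], [0.88, 0.85], [0.92, 0.79],   # e.g. three similar outdoor shots
])
Z = linkage(features, method="average")          # build the dendrogram
labels = fcluster(Z, t=2, criterion="maxclust")  # cut it into two clusters
print(labels)   # the first three shots share one label, the last three the other
```

A query can then be answered by descending the dendrogram instead of scanning every shot, which is the retrieval speed-up the abstract refers to.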
Deep learning is one of the most popular approaches in medical imaging today. Our recent efforts have focused on the use of deep learning for lesion classification in the medical field, in particular skin lesion and capsule video endoscopy classification. We investigate the role of deep features alone, hand-crafted features alone, combined feature sets, segmentation, and more standard features such as SIFT in increasing classification accuracy. In this talk, we will present flavors of these various works, describe the deep network topologies used, and discuss the effect of each of these aspects on classification accuracy.
Sule Yildirim is an associate professor at the Norwegian University of Science and Technology (NTNU) in the Department of Information Security and Communication Technology. Her main research interests are artificial intelligence, the application of machine learning in various fields, signal and image processing, and biometrics. She has participated in projects funded by the EU Horizon 2020, Eurostars and Erasmus+ programs, the Research Council of Norway, the Regional Research Council of Norway and the Ministry of Foreign Affairs, Norway. She belongs to the Norwegian Information Laboratory, the Center for Cyber and Information Security and the Norwegian Biometrics Laboratory. She has been supervising students at different academic levels for over 20 years and has published more than 80 journal and conference papers in her fields of research. She also actively takes part as a PC member in conferences and acts as a reviewer for several journals.
Our project has been carried out in the context of recent major developments in botics and the more widespread usage of virtual agents in the personal and professional spheres. The general purpose of the experiment was to thoroughly examine the character of the human–non-human interaction process. Thus, in the paper, we present a study of human–chatbot interaction, focusing on the affective responses of users to different types of interfaces with which they interact. The experiment consisted of two parts: measurement of the psychophysiological reactions of chatbot users, and a detailed questionnaire assessing the interaction and the willingness to collaborate with a bot. In the first, quantitative stage, participants interacted with a chatbot: either a simple text chatbot (control group) or an avatar reading its responses aloud in addition to presenting them on the screen (experimental group). We gathered the following psychophysiological data from participants: electromyography (EMG), respiration (RSP), electrocardiography (ECG), and electrodermal activity (EDA). In the final, declarative stage, participants filled out a series of questionnaires on the experience of interacting with (chat)bots and on the overall human–(chat)bot collaboration assessment. A theory-of-planned-behavior survey investigated attitudes towards cooperation with chatbots in the future; a social presence survey checked how much the chatbot was considered a "real" person; and an anthropomorphism scale measured the extent to which the chatbot seemed humanlike. Our particular focus was on the so-called uncanny valley effect: the feeling of eeriness and discomfort towards a given medium or technology that frequently appears in various kinds of human–machine interactions. Our results show that participants experienced a weaker uncanny effect and less negative affect when cooperating with the simpler text chatbot than with the more complex, animated avatar chatbot.
The simpler chatbot also induced less intense psychophysiological reactions. Despite major developments in botics, users' affective responses towards bots have frequently been neglected. In our view, understanding the user's side may be crucial for designing better chatbots in the future and can thus contribute to advancing the field of human–computer interaction.
Railway transport traditionally uses a preventive maintenance system for the repair of rolling stock. Repairs are carried out within a specified time frame and involve disassembly, resource assessment, restoration or replacement, assembly and testing of individual components or of the transport unit as a whole. The repair technology relies on manual labor; automation and robotics are practically not involved. This is understandable given the existing ratio between the cost of labor and that of technical means. However, this ratio is changing: automation equipment and robotic complexes are developing and becoming more affordable, while the cost of human labor increases. This makes it realistic, in the foreseeable future, to plan the use of robotic means at enterprises repairing railway equipment. One of the important technological operations in repairing railway equipment is the cleaning of individual components and parts after disassembly. The nature of their contamination is very diverse: it can be caused by the external environment (dust, smoke, oil products, acid-alkaline substances) or by processes within the mechanisms themselves (wear products, deposits, corrosion). Until now, contaminants have been removed by washing with water containing unsafe detergents. The temperature and pressure of the detergent solution must be kept high, which makes this process very energy intensive. The use of robotic complexes is proposed for the technological operations of cleaning the units and parts of railway rolling stock during its repair. There is positive experience with an automatic device for cleaning the electric machines of locomotives, which uses a special manipulator to move the nozzle of the cleaning device according to the configuration of the surface to be cleaned.
Further improvement of such technology is possible through:
- automatic recognition of the object to be cleaned;
- the possibility of changing the working tool;
- the choice of a cleaning agent;
- the possibility of changing the cleaning method;
- evaluation of the cleaning quality.
Prospects for further research in this area are supported by the rapid growth in the capabilities of industrial robots, the emergence of effective chemical preparations with detergent properties, and new abrasive materials. The nearest milestones should be libraries of electronic models of all parts of the rolling stock. It is also necessary to develop adaptive algorithms for controlling robotic complexes.
Vladimir Puzyr received his doctorate in 2006 and is a professor at the Ukrainian State University of Railway Transport, Ukraine. He is the director of the research and development center for tractive rolling stock, an organization that deals with the problems of improving technologies for repairing railway equipment. He has a significant number of publications in specialized scientific journals of Ukraine, has co-authored a monograph published by Springer, and serves on an editorial board at his university. Since 2016 he has been an elected full member of the Transport Academy of Ukraine.
In the field of robotics, industrial robots are specially designed to perform repetitive tasks, tasks that do not change for a considerable period of time, and can be programmed to bring consistency and precision and to compress production timelines. To date, companies have succeeded in designing and manufacturing robots with multi-axis movement and payload capacities that may exceed 2000 kg. Such robots have found applications mainly in the automotive sector and are making rapid strides into other sectors such as chemicals, mining and metals. Embedding artificial intelligence in robots can create wonders for manufacturing and production systems and will be influential in setting a new precedent in industrialization. Companies are making rapid advances in the cognitive abilities of robots: the ability to recognize size, color, shape and material, and then to apply relevant processes learned from repetition in the past, will propel robots to the next level. Traditional robot manufacturers have tied up with companies developing AI solutions or platforms that help them generate and analyze data from their processes and establish patterns, which will aid in increasing the efficiency of robots. We have had three eras of industrialization so far and are now in the fourth era of disruptive transformation; by the time we complete this phase, we will have succeeded in removing humans from factories carrying out routine jobs. The impact of the new technology on both production systems and human labor is expected to be dramatic. In the western part of the world, where labor costs are relatively high compared with some Asian countries, robots are set to replace such labor.
Rohan Salgarkar, Vice President – Sales, heads the entire sales operation of MarketsandMarkets. He has over 13 years of experience in sales and business development, and has built and manages a strong sales team to achieve MarketsandMarkets' ambitious targets of growing sales revenue multi-fold.
Rule-based data fusion for heterogeneous field sensory systems remains a challenging research paradigm, primarily due to the inherent difficulty of quantifying the output response of the system. The problem becomes even more critical when a limited number of elemental sensor units must be dealt with in the field, in contrast to traditional theories that assume a dense agglomeration of (identical) sensor units. In fact, the field fusion models used hitherto have been found somewhat inappropriate for online detection of target objects unless those are in a pre-assigned layout. Moreover, the effect of the zonal distribution of the field sensors has been largely unattended. To address these lacunae, the present paper dwells on the modeling, algorithm and theoretical analysis of a novel fusion rule base, in refined form, centering on the zone-based relative dependency of the finite number of field sensor units. In addition, a new proposition is developed for assessing the decision threshold band, signaling the activation of the field robotic gripper, using a stochastic model.
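The zone-weighted fusion rule and the decision threshold band can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the weights, the band limits and the three-way outcome are hypothetical stand-ins for the paper's stochastic model, not its actual rule base.

```python
def fuse_decisions(decisions, weights, band=(0.35, 0.65)):
    """Weighted rule-based fusion of binary field-sensor decisions.

    decisions: list of 0/1 local decisions from the field sensor units
    weights:   zone-based relative weights, one per sensor (hypothetical)
    band:      (low, high) decision threshold band
    Returns 'activate', 'reject', or 'undecided'.
    """
    total = sum(weights)
    # Normalised weighted vote: fraction of weighted mass saying "target".
    score = sum(d * w for d, w in zip(decisions, weights)) / total
    low, high = band
    if score >= high:
        return "activate"    # confident enough to trigger the gripper
    if score <= low:
        return "reject"      # confident no target is present
    return "undecided"       # inside the band: e.g. re-sample the field
```

The threshold band gives the system a third outcome besides activate/reject, which is where a stochastic model of sensor reliability would set the band limits.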
Development of smart materials using ionic polymer-metal composites (IPMCs) is a demanding area of research [1-2]. IPMCs are now recognized to have potential applications in bio-mimetic sensors, actuators, transducers and artificial muscles, and offer several advantages such as bio-compatibility, low power consumption and miniaturization. We have been engaged in developing IPMC-based actuators and sensors [3,4]. Recently we reported results of actuation and sensing studies of a five-fingered miniaturized robotic hand fabricated using IPMC. Very recently, we have explored the possibility of using Nafion-based IPMC for sensing the rhythm of the human pulse and heart rate. In this talk the concept of a novel pulse-rate sensing device is introduced, exhibiting proof of principle of the mechano-electrical functions of the device, namely an IPMC film prepared by surface platinization of the ionic-polymer film.
Dr. Debabrata Chatterjee is former Head of the Chemistry and Biomimetics Group of the CSIR-Central Mechanical Engineering Research Institute at Durgapur, India. He is now engaged as Research Advisor in the Vice-Chancellor's Group at the University of Burdwan, Burdwan, India. His present research interests lie in the development of bio-inspired devices using electro-active polymers. He is an elected Fellow of the National Academy of Sciences, India (FNASc) and a Fellow of the Royal Society of Chemistry, UK (FRSC). Childhood polio has left him physically challenged with a considerable mobility problem.
In this article, we develop a hybrid model by coupling a mathematical model (EBM) of the intra-host phase of Rift Valley fever (RVF) with an agent-based model (ABM) of the interactions between animals, mosquitoes, pond water and climate. The mathematical model describes the dynamics of healthy cells, infected cells, virus populations and immune effector cells. The aim of this work is to create a virtual environment in which the coupled EBM and ABM allow us to follow the evolution of RVF after animal infection. A further objective is to minimize the storage required in the computer's main memory for the data used by all agents. We present a mathematical analysis of the model: we calculate the basic reproduction number R0, and we compute the equilibrium points and their stability. We present the results of numerical simulations to illustrate and validate the theoretical results. The developed model can serve to evaluate the proportion of infected cells and viruses in the animal's organism at each stage of the disease's evolution.
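To make the intra-host side concrete, the following is an illustrative target-cell-limited model of healthy cells T, infected cells I and free virus V, integrated with forward Euler. The equations, parameter names and values are assumptions for illustration: this is a generic viral-dynamics sketch, not the paper's actual EBM, and the immune effector compartment is omitted for brevity.

```python
def simulate_intrahost(beta, delta, p, c, lam, d, T0, I0, V0,
                       dt=0.01, steps=10000):
    """Forward-Euler integration of a target-cell-limited model:
       dT/dt = lam - d*T - beta*T*V   (healthy cells)
       dI/dt = beta*T*V - delta*I     (infected cells)
       dV/dt = p*I - c*V              (free virus)
    Returns the final (T, I, V) state."""
    T, I, V = T0, I0, V0
    for _ in range(steps):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
    return T, I, V

def basic_reproduction_number(beta, delta, p, c, lam, d):
    """R0 for this model: new infections produced per infected cell at
    the infection-free equilibrium T* = lam/d. Infection dies out when
    R0 < 1 and invades when R0 > 1."""
    return beta * p * lam / (d * delta * c)
```

The same structure generalises: the paper's analysis of equilibria and stability is what justifies reading R0 off the linearisation at the infection-free state.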
Intelligence may be variously viewed as gift, art and science. Intelligence may be contrasted with intellect, that uniquely human capacity which develops through experience. Artificial intelligence implies human creation, e.g. robots, which were developed to simulate human behavior. Heart intelligence refers to physical, emotional and spiritual forms of intelligence, known to many cultures through the ages and recognized anew through interdisciplinary research pioneered by the HeartMath Institute. One recent finding, supporting various meditative and wisdom traditions including Hinduism, Buddhism, Taoism, Christianity and Islam, is of a relatively autonomous "heart brain" with integrative hormonal, biophysical, neurochemical, electromagnetic and intuitive functions. It has been hypothesized that such integral functioning could be associated with a vital neurological shift in the mechanics of perception from binary, dualistic to unitive, non-dual consciousness. Implicit in this hypothesis is coherent alignment with other dynamic physiological subsystems, not only the prefrontal cortex of the "head brain", with its associations with supposedly uniquely human forms of creativity, moral reasoning and executive action, but also wider social, ecological and global systems. The aim of this presentation is to review heart intelligence and the HeartMath coherence model with special reference to their implications for artificial intelligence and robotics.
Steve Edwards is currently an Emeritus Professor and Research Fellow in the Psychology Department of the University of Zululand. Qualifications include doctoral degrees in Psychology and Education and registrations in South Africa and the United Kingdom as Clinical, Educational, Sport and Exercise Psychologist. He has supervised over 100 masters and doctoral students, published over 200 scientific works and has served on many boards of national and international organizations. He has presented papers at international conferences in over 30 countries. Steve’s research, teaching and professional activities are mainly concerned with health promotion. For details see https://www.researchgate.net/profile/Stephen_Edwards11
Robotics is a cutting-edge technology that is changing the future of the world; the world of tomorrow cannot be imagined without robots as an integral part. The world population has increased to 7.6 billion, and agriculture is the backbone of sustaining it. More and more people are moving from rural to urban areas, reducing the number of hands available for large-scale agriculture. Robots have the potential to help humans improve the agricultural yields of their farms, and we have been experimenting with using robots in different areas of agriculture. Agricultural means and methods, including spraying, disease and pest detection and irrigation systems, are being revolutionized around the world as emerging technologies are incorporated to develop the primitive means. The main purpose is not only to increase crop yield per hectare but also to improve crop quality while preserving the environment. Agricultural automation has so far revolved around making bigger and faster machines; what these machines lack is intelligence. Without intelligence, they spray poison on areas where it is not needed, in turn getting this poison into the crop. The aim is therefore to make these automated machines intelligent so that they act only where needed. Weeds are unwanted plants that compete with crops for light, nutrients and water, and they also shelter different types of pests. Manual weeding is one means of removal, but it is unviable due to the cost incurred and the limited availability of human labour. The other widely used alternative is uniform spraying, which not only adversely affects crop quality but also increases production cost. We present some image processing and computer vision techniques that can detect weeds in rice and wheat fields.
Using these techniques, localized detection can be achieved and precision spraying can be applied specifically to the weed-infested crop area to produce a healthy and economical yield. We have also worked on pest detection using normal and multi-spectral cameras. Our current work also includes automatic disease detection in crops: for this purpose, we chose a flower crop and successfully detected a fungus disease. We have also introduced some solutions for fertilizer industry automation together with GE.
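As a concrete example of the kind of image processing such systems build on, the excess-green index ExG = 2g - r - b (computed on chromaticity-normalised RGB) is a classic baseline for separating vegetation from soil; weeds are then distinguished within the vegetation mask by further features. This is a generic sketch, not the authors' specific detector, and the threshold value is a hypothetical starting point that would be tuned per field and camera.

```python
def excess_green_mask(pixels, threshold=0.1):
    """Classify each (r, g, b) pixel as vegetation (1) or background (0)
    using the excess-green index ExG = 2g - r - b on normalised channels."""
    mask = []
    for r, g, b in pixels:
        s = r + g + b
        if s == 0:
            mask.append(0)           # pure black: treat as background
            continue
        rn, gn, bn = r / s, g / s, b / s
        exg = 2 * gn - rn - bn       # large when green dominates
        mask.append(1 if exg > threshold else 0)
    return mask
```

In a real pipeline the mask would be computed over a full image array and cleaned up with morphological filtering before the weed/crop discrimination step.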
Dr. Yasir Niaz Khan obtained his Ph.D. from the University of Tübingen, Germany, in 2013. During his Ph.D. he conducted research on the automatic detection of terrain (ground surfaces) using a camera mounted on a flying robot and on a ground robot. Upon completing his graduate studies, Dr. Khan started teaching robotics at FAST-NU, Lahore, Pakistan, where he founded a new robotics lab for students to promote robotics in Pakistan. He supervised many national and international robotics events held at FAST-NU, in which professors and students from different universities presented their work and participated in competitions. His projects at the University of Central Punjab won many national awards as well as 2 UN awards. Dr. Khan is now at The University of Lahore, the largest private university in Pakistan, where he has extended his work further. He is Director of the "Research Group on Robotics and IoT" that he established at the university. He is also involved in a 3-year project on agricultural automation funded by DAAD Germany in collaboration with the University of Kaiserslautern, Germany.
We are here to assess the status of AI, to see how it will adapt in the future to help with different issues, and to recognise that, thanks to AI, we can choose the future we want to build. We also want to see how the different goals we have, such as the UN SDGs, can be tackled through AI. The future of work is one where we adapt to the changes happening in the world, which is made easier by AI, and this changes the whole comfort level. The whole purpose of Anima is to build a positive AI. As an AI XPRIZE Ambassador, I have been selecting teams in my region in France and a few other European countries in order to find teams that could impact a billion lives through an AI application.
Alexandre Cadain is an entrepreneur exploring ways towards radical, positive and inclusive innovation through deep technologies, especially AI. Passionate about art and science, he developed an art gallery while studying at HEC Paris and the École Normale Supérieure. Committed to the idea that our New Renaissance should be about distributing the future evenly in the world, he went on a tour of developing countries in 2013-2014 to investigate how social businesses could leverage new technologies to leapfrog. He co-organizes the Post-digital seminar at the École Normale Supérieure, which explores collective imagination for technology, specifically AI; at the intersection of art and science, the seminar's research targets Artificial Imagination for 2017-2019. He joined the XPRIZE foundation, working specifically on the IBM Watson AI XPRIZE as an ambassador, responsible for selecting and helping AI teams in Europe that could positively impact a billion lives by 2020. He is the co-founder and CEO of Anima, a moonshot studio designing solutions for massive present and future challenges.
The tremendous growth of artificial intelligence has not only altered the way of living but has also brought about a panoramic shift in the warfare scenario, impelling the military to set the stage for the future battlefield. The locations of war might shift from ground and air to the deep sea and outer space. This eccentric nature of the future battlefield calls for perspective planning for the nation's defence and underscores the need to address the changing nature of warfare. The non-linear battlefield of tomorrow will demand high levels of cognition for decision making and extreme precision targeting. In such a scenario, conventional mass weapons will prove vulnerable in the years to come. An intelligent weapon mechanism that requires zero human intervention and can do much more than just fire, utilising technologies such as lasers, microwaves, deep learning and computer vision for defence, surveillance and target detection, appears to be a logical solution for future combat. A weapon cluster that uses collective memory and mimics the human brain with unattended sensors would provide the desired intelligence, scene awareness and friend-enemy identification capabilities. The existence of cyberspace and satellites would facilitate the deployment of such weapon clusters in much more diverse and mobile environments. The system would create a complex war theatre operating beyond the limits of human reaction time. This paradigm shift would result in an evolved model of warfare for achieving military effectiveness in the future. However, the potential of machine learning and artificial intelligence in this new realm of intelligent warfare may be constructive or destructive. Replacing human soldiers with machines is good in that it reduces casualties, but bad in that it would eventually kill the laws of war. It is beyond doubt that the potential of these weapons is far deadlier than their human counterparts, and they can be converted into indiscriminate death machines.
On one side is the idea of using AI to make battlegrounds safer for humans; on the other is the possibility of creating a dynamic, indiscriminate killing machine at a surprisingly low cost.
Swati Johar is Scientist 'D' at the Defence Institute of Psychological Research (DIPR), DRDO, Delhi, India. She is involved in many major interdisciplinary research projects, including research on target detection using deep learning techniques and computer vision. Her research work on gesture and speech recognition has been published in reputed international journals, including Springer titles, and is proposed to be integrated with the New Selection System being developed for the Indian Armed Forces. Emotion recognition and non-verbal behaviour are among her areas of interest, and she is the author of a few book chapters dealing with human-machine interaction and technological emergence. Recently, she authored the book 'Emotions, Affect and Personality in Speech: The Bias of Language and Paralanguage', which explores the various categories of speech variation and works to draw a line between linguistic and paralinguistic phenomena of speech. She has published scientific articles in peer-reviewed journals and conferences and serves as a reviewer for many reputed journals.
Collaborative robots (cobots) are increasingly common in manufacturing, but today's cobots coexist with people rather than truly collaborate. For cobots to become truly collaborative, the right tasks, the right interaction and the right interface need to be developed. Traditional industrial robots often have HMIs (teach pendants) that were not designed to be interactive and collaborative. In other fields of automation, such as social robots and AI, this technology has existed for years but has not yet been applied to industrial robots. Examples exist, such as Baxter and Sawyer from Rethink Robotics, but these robots are still not truly collaborative. One main issue that must be considered is human safety around such robots. The ISO/TS 15066:2016 standard defines four criteria for how a safe collaborative cell should be designed. Technologies such as wearables could be used to create interaction between the human and the cobot, both to define safety zones and to provide interaction and instructions for the human. This presentation aims to show how the areas of social robots and industrial robots could merge in order to make cobots truly collaborative in manufacturing, and furthermore how wearables such as glasses can be used to create a safe environment in which the human can interact and collaborate with the robot.
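One way wearable-derived distance information could feed into a safety zone is speed and separation monitoring, one of the collaborative operation methods described in ISO/TS 15066. The sketch below is a deliberately simplified illustration: the distances, speeds and linear ramp are hypothetical defaults, not values from the standard, which derives the protective separation distance from robot and human speeds, stopping time and measurement uncertainty.

```python
def robot_speed_command(human_distance_m, v_max=1.0,
                        stop_dist=0.5, slow_dist=1.5):
    """Map the measured human-robot distance to a commanded robot speed:
    full stop inside the protective distance, linear speed scaling in
    the buffer zone, full speed beyond it. All numbers are hypothetical."""
    if human_distance_m <= stop_dist:
        return 0.0                       # protective stop
    if human_distance_m >= slow_dist:
        return v_max                     # no human nearby: full speed
    # Linear ramp between the stop distance and the slow-down distance.
    frac = (human_distance_m - stop_dist) / (slow_dist - stop_dist)
    return v_max * frac
```

A wearable (e.g. smart glasses or a tracked badge) would supply `human_distance_m` each control cycle, and the same zones could simultaneously drive visual feedback to the operator.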
Åsa Fast-Berglund is an Associate Professor at Chalmers University of Technology. She is project leader for Stena Industry Innovation Lab (SIILab). Her research areas are within physical and cognitive automation within final assembly in production systems. She has published more than 90 papers in reputed journals and conferences.
With the exponential increase in the ageing population globally over the last few decades, efficient low-cost assistive systems are becoming indispensable for optimising social care work. Many countries around the world face an ageing population, a fact that necessitates the innovative development of more efficient, low-cost ambient assisted living (AAL) monitoring systems built on optimised infrastructure. Essentially, an AAL system comprises a number of subsystems and thus requires a multidisciplinary approach to research, design, development, integration and deployment. The motivation behind an AAL system is to provide elderly or disabled people with affordable chronic health care monitoring in their own homes and thereby promote well-being and independence. To realise this vision, the research community is actively working in related fields such as sensor technologies (wearable, environmental, physiological, audio and video), activity identification, and the analysis of behavioural patterns for long-term predictive health-assessment analytics. Identifying human behaviour requires appropriate sensors, of which there are three main types: wearable, distributed environmental and vision-based.
Dr Armaghan Moemeni is a principal lecturer in games and intelligent systems in the School of Computer Science and Informatics at De Montfort University. Her main research interests and expertise are in computer vision, computational intelligence, interactive computing and sensor fusion techniques. She is an affiliated member of the IEEE, IET and BMVA (British Machine Vision and Pattern Recognition Association). She is also responsible for organising research seminars in the Faculty of Technology, De Montfort University (FOT-RSS). Armaghan reviews for a number of IEEE conferences and journals, for example The Computer Journal (Computational Intelligence, Machine Learning and Data Analytics section) and the Journal of Ambient Intelligence and Humanized Computing.
An Intelligent Cloud-Based System for Ambient Assisted Living
Cognitive Computing and Machine Learning: Building Neural Conversational Agents through Chatbots.
• Analysing heuristic elements in chatbots using pattern recognition and rule-based expression matching systems
• Discovering the potential of chatbots as generative models to create natural responses
• Reviewing AI competence in self-learning systems using data mining and pattern recognition
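The rule-based expression matching mentioned in the first bullet can be sketched with ordinary regular expressions: an ordered list of (pattern, response template) rules, where the first matching rule supplies the answer. The rules below are illustrative only, not any particular product's rule set.

```python
import re

# Ordered (pattern, response-template) rules; captured groups can be
# substituted into the template.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]

def respond(message, rules=RULES,
            fallback="Sorry, I did not understand that."):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in rules:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return fallback
```

Generative (neural) chatbots, by contrast, produce the response text itself instead of selecting from templates; the rule base above is the classic pattern-matching baseline they are compared against.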
Vinod Ebinezer has a total of 15 years of IT experience and has worked with Fortune companies such as Schlumberger and JPMorgan. He holds an engineering degree in Information Technology from Bangalore and a Master's degree from Belgium.
Strongly nonlinear systems can exhibit chaotic behavior; one famous example is the van der Pol equation of oscillatory behavior. This study investigates control design for the forced van der Pol equation, simulating various feedforward control designs through trajectory planning. Trajectory designs fed through an idealized feedforward controller can produce output trajectories not natural to the nonlinear plant. The results highlight that an idealized nonlinear feedforward control performs quite well after an initial transient caused by the initial conditions. Additionally, the study shows that these initial transients can be greatly reduced through a well-developed trajectory plan while simultaneously forcing the output to a desired circular state in the phase plane. Since the analytical development of ideal nonlinear feedforward control is straightforward, this article focuses on numerical demonstrations of trajectory tracking error.
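A minimal numerical sketch of the idea: invert the plant model along a planned sinusoidal trajectory to obtain the feedforward input, then simulate the van der Pol plant under that input and measure the tracking error. The parameter values, the explicit Euler integrator and the trajectory choice are illustrative assumptions, not the article's actual study configuration.

```python
import math

def simulate_vdp_feedforward(mu=0.5, R=2.0, omega=1.0, x0=None, v0=None,
                             dt=1e-3, steps=4000):
    """Plant: x'' - mu*(1 - x^2)*x' + x = u(t).
    Idealized nonlinear feedforward: plug the planned trajectory
    x_d(t) = R*sin(omega*t) into the plant model to get u(t).
    Returns the final absolute tracking error |x - x_d|."""
    if x0 is None:
        x0 = 0.0            # matches x_d(0)
    if v0 is None:
        v0 = R * omega      # matches x_d'(0)
    x, v, t = x0, v0, 0.0
    for _ in range(steps):
        xd   = R * math.sin(omega * t)
        xdp  = R * omega * math.cos(omega * t)
        xdpp = -R * omega ** 2 * math.sin(omega * t)
        # Feedforward input from model inversion along the plan:
        u = xdpp - mu * (1 - xd ** 2) * xdp + xd
        a = mu * (1 - x ** 2) * v - x + u   # plant acceleration
        x += dt * v
        v += dt * a
        t += dt
    return abs(x - R * math.sin(omega * t))
```

With initial conditions matched to the plan, tracking is essentially exact; starting `x0`/`v0` away from the plan reproduces the initial transient the abstract describes.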
Mr. Cooper has completed an M.S. in Electrical Engineering and an M.S. in Aeronautical Engineering from the Air Force Institute of Technology, USA, and his MBA from the University of South Dakota, USA. He is a Deputy Program Manager for the Air Force Research Laboratory – Directed Energy Directorate, a premier research organization. His research focus areas are centered on nonlinear feedforward control, optical beam steering, and disturbance rejection techniques.
Natural language processing (NLP) is the ability of a computer program to understand human language; NLP is a component of artificial intelligence (AI). Building NLP applications is demanding because computers conventionally require humans to address them in a programming language that is precise, unambiguous and highly structured, or through a restricted number of clearly uttered voice commands. Human speech, however, is not always precise: it is frequently ambiguous, and its structure can depend on many complex variables, including colloquial speech, regional dialects and social context. NLP covers several problem classes. Sentiment analysis determines the attitude or emotional reaction of a speaker towards a particular topic. Document classification addresses the everyday categorization problem of assigning one of several possible labels to each document. In language modeling, a model is required to predict a sequence of words rather than a single label, a breakthrough when dealing with sequential data. Automatic text summarization extracts only the most important parts of a text while preserving its meaning. In question answering, a model must not only comprehend a question but also understand a text of interest and identify precisely where to look in order to assemble an answer. Companies use AI to provide recommendations such as locating the nearest bakery, booking a train ticket or ordering things online; sentiment analysis is used during political campaigns to make informed decisions by monitoring trending disputes on social media and to scrutinize long customer reviews of products on websites; and call centers use NLP to evaluate the complaints of callers. This session will include an introduction to NLP, prevailing problems in NLP, and applications of NLP in several settings.
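As a toy illustration of sentiment analysis, the simplest baseline is a lexicon count: tally positive and negative words and compare. The word lists below are tiny illustrative samples; practical systems use learned classifiers or large curated lexicons, and handle negation, intensifiers and context.

```python
# Illustrative polarity lexicons (hypothetical, deliberately tiny).
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "angry"}

def sentiment(text):
    """Label a text by counting polarity words in a naive whitespace
    tokenization; ties fall back to 'neutral'."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The same bag-of-words view, with learned per-word weights instead of a fixed lexicon, is the starting point for the document classification task described above.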
Nikunj Tahilramani is pursuing his PhD at Uka Tarsadia University, Gujarat, India. He is currently appointed as Chapter Secretary of the Signal Processing Society, IEEE Gujarat Section; IEEE is the world's largest technical professional organization. He has presented and published more than 15 papers in reputed journals and international/national conferences and serves as an editorial board member of reputed UGC-approved journals. He has guided more than 15 postgraduate dissertations, and 5 postgraduate students are currently pursuing their research under his guidance. He is presently working as an Assistant Professor and Head of Department in Electronics & Communication Engineering at Silver Oak College of Engineering & Technology, Ahmedabad.
In the robotics community, significant study has been directed at the SLAM problem, in which a robot uses onboard sensors to autonomously map its environment and then uses the derived map to localize its position. The map is composed of features from the surrounding environment; indoors, the feature sought is often a wall, represented by a line or line segments. Several line extraction algorithms have already been developed and compared, though the focus is often on mapping rather than localization. Localization is especially difficult due to the high dependency on parameterization in previous algorithms. Most attempts to deal with the inherent inaccuracy involve adding odometry from the robot's wheels, which places additional requirements on the robot and limits its applications to environments supporting wheeled travel. The current research seeks to improve the results obtained with a single sensor. By separating localization performance from mapping performance, new algorithms can be developed to optimize each, with the priority becoming a user-defined input. Laser rangefinders provide improved accuracy and greater angular resolution than the ultrasonic sensors that preceded them, and with lower costs and higher availability they can be incorporated into more applications than before. Novel approaches to the calculation, grouping and scale of the Hough Transform parameters are incorporated into a new algorithm that provides near-real-time results used to improve the localization of the sensor.
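The accumulator voting at the heart of the Hough Transform can be sketched as follows, using the normal line parameterization rho = x*cos(theta) + y*sin(theta): every point votes for all (rho, theta) bins it could lie on, and bins with many votes correspond to extracted lines. The bin sizes and vote threshold here are illustrative assumptions; the research described above concerns novel choices of exactly these parameters.

```python
import math

def hough_lines(points, rho_step=0.1, theta_steps=180, threshold=5):
    """Extract candidate lines from 2-D range points via a sparse
    Hough accumulator. Returns (rho, theta, votes) triples for every
    bin reaching the vote threshold."""
    acc = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = k * math.pi / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_step), k)     # quantize into a bin
            acc[key] = acc.get(key, 0) + 1
    lines = []
    for (r_bin, k), votes in acc.items():
        if votes >= threshold:
            lines.append((r_bin * rho_step, k * math.pi / theta_steps, votes))
    return lines
```

Near-parallel bins collect votes too, so a practical extractor would follow this with non-maximum suppression; scan points from a laser rangefinder would first be converted from (range, bearing) to (x, y).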
Henry Travis received a B.S. in Electrical and Electronics Engineering from Oregon State University in 1993, and M.S. degree in Electrical Engineering from the Naval Postgraduate School in 2001. As a Surface Warfare Officer in the U.S. Navy, he served as a Military Instructor in the Space Systems Academic Group at the Naval Postgraduate School from 2008 to 2013. He is currently a government civilian at Information Warfare Training Command Monterey at the Defense Language Institute on Presidio of Monterey and a Ph.D. Candidate in the Department of Electrical and Computer Engineering at the Naval Postgraduate School.
Machine learning and robotics are at the top of developers' priorities for 2018, with 69.4 percent stating that they are building robotics apps and 48.7 percent of all developers indicating the use of machine learning in their projects. And they will continue to be top priorities. Why? Military contracts. Massive investments by a barrage of auto manufacturers. Drone mania. Innovations scaled up by major robotics manufacturers. And a whole new wave of start-ups. Machines can learn in a variety of different ways, and it seems like every day there is a new stealth learning methodology in the works. But machine vision involves more than just computer algorithms: engineers and roboticists also have to account for the camera hardware that allows robots to process physical data. Robot vision is very closely linked to machine vision, which can be credited with the emergence of robot guidance and automatic inspection systems, and robots can physically affect their environment. An influx of big data, i.e. visual information available on the web (including annotated/labeled photos and videos), has propelled advances in computer vision, which in turn has furthered machine-learning-based structured prediction techniques, leading to robot vision applications such as the identification and sorting of objects. An example is anomaly detection with unsupervised learning, such as building systems capable of finding and assessing faults in silicon wafers using convolutional neural networks. Extrasensory technologies like radar, LIDAR and ultrasound are also driving the development of 360-degree vision-based systems for autonomous vehicles and drones.
Ali Sina graduated from USC and then completed his Master’s degree at Pepperdine University. He is currently working on next-generation AI solutions at the UCI Applied Innovation lab. Ali is the Founder and CEO of Forkaia, a cognitive platform that matches data scientists and programmers with AI projects they find interesting. He is building the AI ecosystem of the future with a culture built around collaboration, experimentation and disruption, and he uses this ecosystem for constant experimentation, virtually incubating, accelerating and holding a portfolio of his own companies.
Globally, buildings are responsible for approximately 40% of the total world annual energy consumption. Most of this energy is for the provision of lighting, heating, cooling, and air conditioning. Increasing awareness of the environmental impact of CO2, NOx and CFC emissions has triggered a renewed interest in environmentally friendly cooling and heating technologies. Under the 1987 Montreal Protocol, governments agreed to phase out chemicals used as refrigerants that have the potential to destroy stratospheric ozone. It was therefore considered desirable to reduce energy consumption and decrease the rate of depletion of world energy reserves and pollution of the environment. This article presents a comprehensive review of energy sources, the environment and sustainable development. This includes all the renewable energy technologies, energy efficiency systems, energy conservation scenarios, energy savings and other mitigation measures necessary to reduce climate change.
Robots are undoubtedly one of the biggest breakthroughs in technology. Now that we already have robots as household helpers and workers, the next question is: what is the future? The next step is to make robots understand our needs. For instance, you should be able to ask your robot to avoid a road because driving on the freeway unnerves you. Big Data and Cloud Computing technologies can make this possible. There have been many recent efforts towards the integration of cloud services with robots. One of the most interesting ones connects robots to the cloud in such a manner that they can use YouTube for getting food recipes or learning tasks. Online services like Google Maps, YouTube and WikiHow have obvious advantages in helping a robot improve its capabilities and ‘learn’. Besides this, other applications, such as an autonomous robot car with cloud-provisioned GPS or a robot exercise coach that uploads your data to the cloud so that you can access it later on your smart devices, are also possible. One of the most critical and useful applications of Big Data and Cloud Computing lies in healthcare. These technologies have literally changed the way this sector functions, with applications like personalized medicine and smart wearables. Robots have also changed healthcare immensely in areas like surgery. However, if they are integrated with Big Data and Cloud Computing, they can take healthcare to heights we cannot even imagine. In this talk, we shall explore the research trends in the fusion of these technologies and their future scope.
Samiya Khan has been a Ph.D. student in the Department of Computer Science, Jamia Millia Islamia, New Delhi, India, since October 2015. Samiya received a B.Sc. degree in Electronics and an M.Sc. degree in Informatics from the University of Delhi, India. She is currently working on ‘A Cloud-Based Framework for Big Data Analytics’. She has published papers in reputed venues such as Information Processing and Management (Elsevier) and has presented her work at international conferences. Her research interests include Big Data, Cloud Computing and Data-intensive Computing.