Scientific Program

Keynote Talks

Abstract

This talk presents a cross-platform, highly efficient approach to building a real-time mobile web server for wireless sensor networks. The mobile web server is based on the Web of Things (WoT) architecture and runs on a hardware module installed in an embedded system. Its advantages are that it collects information easily while keeping data confidential and secure, and it can simultaneously upload data to the cloud, so it functions as a complete wireless sensor network system. When the system receives the signal data transmitted by the sensors, it first classifies and stores the information, then displays it on a monitoring page, and may subsequently upload it to the cloud. Typically, the sensors are connected to a network-capable module, which returns the data to a control center unit. In this research, the mobile web server is built on the same embedded hardware module, and the system can even act as a private cloud data center. The system visualizes the data uploaded by the sensors, plots the relevant historical data, and finally performs big data analysis. Because the web server is built on an embedded operating system, it is straightforward to model and visualize the corresponding graphics using the Python programming language, and these plots make it easy for researchers to compare and analyze the results. Finally, we build an actual embedded mobile web server and compare its performance against mobile web servers established on currently available commercial embedded hardware modules.
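For illustration only, the following minimal Python sketch shows the kind of lightweight web endpoint such an embedded mobile web server might expose; it assumes the Flask package is available, and read_sensors() is a hypothetical placeholder rather than part of the system described in the talk.

    # Minimal sketch (not the authors' implementation) of a lightweight
    # sensor-serving web endpoint, assuming Flask is installed.
    from flask import Flask, jsonify

    app = Flask(__name__)
    readings = []                      # stand-in for the classify-and-store step

    def read_sensors():
        # Hypothetical placeholder: real hardware would poll attached sensors.
        return {"temperature_c": 25.0, "humidity_pct": 40.0}

    @app.route("/data", methods=["GET"])
    def latest_data():
        sample = read_sensors()
        readings.append(sample)        # store the sample
        return jsonify(sample)         # serve it to the monitoring page

    @app.route("/history", methods=["GET"])
    def history():
        return jsonify(readings)       # historical data for later plotting

    if __name__ == "__main__":
        # Bind to all interfaces so other devices on the network can reach it.
        app.run(host="0.0.0.0", port=8080)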

Biography

Wen-Tsai Sung is a professor in the Department of Electrical Engineering, National Chin-Yi University of Technology, and Vice-Dean of Academic Affairs. He received his PhD and MS degrees from the Department of Electrical Engineering, National Central University, Taiwan, in 2007 and 2000, respectively. He won the 2009 JMBE Best Annual Excellent Paper Award and the Dragon Thesis Award sponsored by the Acer Foundation. His research interests include wireless sensor networks, data fusion, systems biology, system on chip, computer-aided design for learning, bioinformatics, and biomedical engineering. He has published a number of international journal and conference articles in these areas. Currently, he is the head of the Wireless Sensor Networks Laboratory. He serves as Editor-in-Chief of three international journals: International Journal of Communications (IJC), Communications in Information Science and Management Engineering (CISME) and Journal of Vibration Analysis, Measurement, and Control (JVAMC), and also serves other international journals as an Associate Editor and Guest Editor (IET Systems Biology).

Speaker
Wen-Tsai Sung / National Chin-Yi University of Technology, Taiwan, China

Abstract

In the past decade, the number of mobile devices has escalated, driven mostly by demand for bandwidth-hungry smartphones. In addition, the Internet of Things (IoT), which consists of interconnected everyday objects, generates huge amounts of data. Therefore, the need for efficient and reliable wireless communications has never been greater. However, the amount of radio spectrum is essentially limited, motivating a perpetual search for efficient coding schemes. Although major advances have been realized in coding theory, wireless mobile systems remain highly susceptible to impairments in the radio channel, and the control of transmission errors continues to be a major research problem and practical concern for communications system designers. The recent trend in error detection and correction research is to use Hermitian curves in the design of codecs. Hermitian-curve-based codecs have shown significant coding gain improvements both as single codes and as code components of Block Turbo Codes (BTCs) and Irregular Block Turbo Codes (IBTCs). Those gains are found to be even higher in severe fading channel conditions, while scaling with increasing modulation index. Currently, there are no attempts to include Hermitian-curve-based codecs in wireless communication standards such as Orthogonal Frequency-Division Multiple Access (OFDMA) air-interface networks (HSPA and LTE) and IEEE WLAN standards such as 802.11g and 802.11n. The reason for this could be a lack of studies in this field of research. Hence, further research could be carried out to extend current results for inclusion in OFDMA- and IEEE-standard-based wireless systems.

Biography

Jafar Alzubi joined the Wireless Communications Research Laboratory at Swansea University (Swansea, UK) in 2008 and completed his PhD in Advanced Telecommunications Engineering in 2012. He is an Associate Professor in the Computer Engineering department at Al-Balqa Applied University and is currently a visiting associate professor at Wake Forest University (USA). His research interests include Hermitian-curve-based codecs and cryptography. As part of his research, he designed the first regular and irregular block turbo codes based on Hermitian curves and investigated their performance across various computer and wireless networks. He has published more than 25 papers in reputed journals and serves on the editorial boards of reputed journals.

Speaker
Jafar Alzubi / Swansea University, UK; Wake Forest University, Winston-Salem, North Carolina, USA

Sessions:

Scientific Sessions

Abstract

Data processing has its roots in the late 1950s, when the need for commercial data processing was realized after successful scientific computing results during and after the Second World War. After an intense effort over the next two decades, encompassing a whole gamut of computing science technologies, databases came into their infancy and adolescence. The following decade saw them mature, bringing commercial database products that could be used in various functional domains. The period also saw work on averaging functions, privacy, security and many other theoretical foundations (e.g. calendars, time, null values, etc.). Some sections of the database community felt that databases as a research field were dead and no longer interesting. Then we saw the emergence of expert systems with a big bang, promising exciting results. A lot of work started in many places, with conferences, journals, courses and products aiming to redeem the promise. But that was not to be: we have not seen much work on expert systems in the current millennium, and expert system shells costing millions of dollars failed to deliver. Data mining came as a breath of fresh air, with more promising applications, during the early 1990s. After about 25 years of intense effort, we are still yet to see good commercial products fulfilling the promise. There have been many papers dealing with different approaches and pointing out directions in which data mining products can become useful. In this talk we propose the idea of Intention Mining, which would solve many of these issues and make data fruitful. Intention Mining is a new paradigm that makes data mining parametric and user-centric and avoids repetitive computations in many applications; this is important because data mining is a computationally intensive job. There have been very few research papers dealing with it, and more work needs to be done. Big Data Analytics can become really powerful using this approach. Otherwise, we would ask the question again: is data mining heading towards the expert systems life cycle (though perhaps of a longer duration)?

Biography

Speaker
Shyam Kumar Gupta / IIT Delhi, India

Abstract

The anticipated demand for ultrafast, ultrahigh-bandwidth information processing has provided tremendous impetus to realize all-optical information processing. Recent advances in enabling technologies, which include advanced optical modulation techniques and coherent detection, high-speed electronics (digital signal processing), nano- and bio-technologies for the design, synthesis and characterization of novel materials, structures and devices with higher nonlinearities and higher efficiencies, and photonic integrated circuits, have opened up exciting new possibilities. The talk will provide a broad overview of recent advances in high-performance, parallel and energy-efficient all-optical information processing, including nanophotonics, plasmonics, organic and silicon photonics, photonic crystals and metamaterials, ultrahigh-Q microresonator structures, optical interconnects, compact femtosecond lasers, biophotonics and quantum photonics. The advent of conservative and reversible optical logic and of photonic molecular, quantum and neuromorphic computing has also opened up new vistas for computing with light. The talk will focus on some of our recent experimental and theoretical results on all-optical switching in a variety of configurations, including graphene, organometallics, ultra-sensitive natural photoreceptor proteins, nanostructures such as high-Q silica and silicon microring resonators, and neuromorphic devices, to design a wide range of all-optical ultrafast Boolean, reversible and reconfigurable computing circuits, including various logic gates, half/full adders-subtractors, counters, MUX/DEMUXs, decoder-encoders, comparators, flip-flops, RAM and an arithmetic logic unit (ALU). The talk will review the status of current technologies, highlight the importance of integrating them, and identify important challenges that need to be overcome to realize the next generation of information processing systems.

Biography

Prof. Sukhdev Roy received his PhD from the Indian Institute of Technology Delhi, India. He has been a Visiting Scientist at many universities, including Harvard, Waterloo, Würzburg, City University London, Osaka and Hokkaido. He was the Guest Editor of the special issue of the IET Circuits, Devices and Systems journal on Optical Computing, March 2011. He has published more than 175 papers in reputed journals and conference proceedings and is an Associate Editor of IEEE Access. He is a Fellow of the Indian National Academy of Engineering and the National Academy of Sciences, India, and a Senior Member of IEEE.

Speaker
Sukhdev Roy / Dayalbagh Educational Institute, INDIA

Abstract

We carried out a proof of concept (POC) with a corpus of documents of about one terabyte. The corpus is composed of multilingual documents crawled from many social media sources. Clusters of related terms are generated from the corpus using deep-learning techniques (word2vec). These clusters are stored in a comma-separated file, declared in Elasticsearch as a synonym file, and used by the system when analyzing queries. RESULTS: The quality of the system improved by about 10 percent, but a trade-off between precision and recall must be made by adjusting the size of the term clusters. This size is fixed in advance and passed as a parameter to the deep-learning algorithm. We do not use synonyms in the indexing phase, because doing so increases the size of the indices by about 20 times or more. To obtain good associations between terms of the same cluster, a huge database (or corpus) must be used; one terabyte is a minimum size. The more documents we have, the better the precision we obtain.
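A minimal sketch of this kind of pipeline is shown below; it assumes the gensim library, a pre-tokenized corpus and a list of seed terms (none of which come from the talk) and writes one comma-separated line per term cluster, the format Elasticsearch accepts as a synonym file.

    # Illustrative sketch only (not the authors' pipeline): build word2vec
    # term clusters with gensim and write them in the comma-separated synonym
    # format that an Elasticsearch synonym token filter can read.
    from gensim.models import Word2Vec

    def build_synonym_file(tokenized_docs, seed_terms, cluster_size=5,
                           out_path="synonyms.txt"):
        # cluster_size mirrors the fixed cluster-size parameter mentioned above.
        model = Word2Vec(sentences=tokenized_docs, vector_size=100,
                         window=5, min_count=5, workers=4)
        with open(out_path, "w", encoding="utf-8") as f:
            for term in seed_terms:
                if term not in model.wv:
                    continue
                neighbours = [w for w, _ in model.wv.most_similar(term, topn=cluster_size)]
                # One comma-separated line per cluster (Solr/Elasticsearch format).
                f.write(", ".join([term] + neighbours) + "\n")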

Biography

Nabil completed his PhD at the University of Paris XI. His research was on using Natural Language Processing in full-text retrieval. Earlier, he worked on using document structure to improve the quality of information retrieval systems, and he carried out research on automatic document classification using tables of contents. He also worked for about twenty years as a consultant in the domain of data engineering, and since 2012 he has been working as an independent expert in big data and data science.

Speaker
Nabil Fegaiere / Paris Sud University

Abstract

One of the main challenges faced by modern society is managing increasing populations within urban areas. In this sense, the paradigm of Smart Cities and Smart Regions aims to provide adequate and sustainable environments, in which the quality of life of citizens as well as increasing participatory levels in decision making are essential. The operation and analysis of these context-aware environments can be described by the interrelation of multiple systems, such as smart grids, multi-modal transportation systems, waste management, irrigation, and health and social service provision, among others. In order to adequately provide user-system interaction, multiple challenges are being faced in different domains, such as data acquisition and aggregation, real-time user interaction, quality-of-service assurance in communication systems, and interoperability, to name a few. In this presentation, the trends, as well as several of the challenges within real use cases, will be described, with emphasis on the role of wireless communication systems in heterogeneous network operation.

Biography

Francisco Falcone received his Telecommunication Engineering Degree (1999) and PhD in Communication Engineering (2005), both at the Public University of Navarre (UPNA) in Spain. From 1999 to 2000 he worked as a Microwave Commissioning Engineer at Siemens-Italtel. From 2000 to 2008 he worked as a Radio Network Engineer at Telefónica Móviles. In 2009 he co-founded Tafco Metawireless. From 2003 to 2009 he was also an Assistant Lecturer at UPNA, becoming Associate Professor in 2009, and he has been Head of the Electrical Engineering Department at UPNA since 2012. His research areas are artificial electromagnetic media, complex electromagnetic scenarios and wireless system analysis, with applications to context-aware environments, Smart Cities and Smart Regions. He has over 400 contributions in journal and conference publications. He has been the recipient of the CST Best Paper Award in 2003 and 2005, the Best PhD award in 2006 from the Colegio Oficial de Ingenieros de Telecomunicación, the Doctorate award 2004-2006 from UPNA, the Juan Lopez de Peñalver Young Researcher Award 2010 from the Royal Academy of Engineering of Spain, and the Premio Talgo 2012 for Technological Innovation.

Speaker
Francisco Falcone / Universidad Publica de Navarra Spain

Abstract

High Performance Computing and Data Analysis (HPCDA), also known as supercomputing, has been a key and strategic capability for advancing science, research and economic progress for many decades, and the competition to build the most powerful computers is now worldwide. In the last two decades, the advent of Cloud Computing has also become a key research and economic driver, albeit with different motivations and driving factors. There is overlap in the technologies used in HPCDA and the Cloud, but also some technologies unique to each. Today, there is uncertainty about which types of problems can best, and most effectively, be done in these two environments. Clever programming makes it possible to do many problems in either environment, but not equally well. What are the differences and characteristics of these environments, and are there applications that need one or the other? What challenges do the environments share, and which challenges are unique? This talk will try to put these debates in context and also propose ways to improve the understanding of when one environment or the other is best for productively solving frontier science challenges.

Biography

William T.C. Kramer holds a PhD in Computer Science from the University of California at Berkeley, BS and MS degrees in Computer Science from Purdue University, and an ME in Electrical Engineering from the University of Delaware. He is Principal Investigator and Director of the Blue Waters Leadership Computing Project at NCSA and a full Research Professor in the Computer Science Department of the University of Illinois at Urbana-Champaign. Kramer was the general manager of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, and worked at the NASA Ames Research Center at NASA's Numerical Aerodynamic Simulator (NAS) supercomputer center.

Speaker
William Kramer / University of Illinois at Urbana Champaign, USA

Abstract

The high-order nuclear organization of the mammalian genome plays significant roles in important cellular functions such as gene regulation and cell state determination. The influx of new details about the higher-level structure and dynamics of the genome from Hi-C and ChIA-PET technologies requires new techniques to model, visualize and analyze the full extent of genomic information in three dimensions. While existing genome browsers have proven to be successful genome information management and visualization tools, these browsers are based on a two-dimensional visual interface with limited capacity to represent structural hierarchies and long-range chromosome interactions, particularly across non-contiguous genomic segments. In this talk, the history, approach and progress of visualizing the human genome over time will be reviewed. Genome3D (http://genome3d.org), the first model-view framework for eukaryotic genomes, will be discussed for its ability to enable integration and visualization of genomic and epigenomic data in a three-dimensional space. The physical genome model implicitly contains all levels of structure and hierarchy and provides an underlying platform for integrating multi-scale genomic information in three dimensions. The viewer uses a hierarchical model of the relative positions of all nucleotide atoms in the cell nucleus. A game-engine-based Genome3D browser was further developed that has better performance, is platform independent, and can be configured to allow users to access and visualize 3D genome models on a remote server. The Genome3D software was further integrated with various genomic databases, providing a multi-scale genome information visualization system to explore and navigate eukaryotic genomes in three dimensions. The new system, iGenome3D, provides a wide spectrum of tools, ranging from model construction to spatial analysis, to decipher the relationships between the 3D conformation of the genome and its functional implications. The incorporation of literature allows users to quickly identify key features from PubMed abstracts for genes in the displayed 3D genome structure. The seamless integration of the UCSC Genome Browser allows genetic and epigenetic features from the 2D browser to be visualized in the 3D genome structure. iGenome3D can also output the 3D genome model in various forms, including one that allows these models to be explored in modern virtual reality environments such as Oculus. Eukaryotic genomes can be analyzed from a completely new angle in iGenome3D, enabling researchers to make new discoveries through truly multi-scale exploration.

Biography

W. Jim Zheng, PhD, MS, joined the UTHealth School of Biomedical Informatics (SBMI) on February 1, 2013, as an associate professor and associate director of the Center for Computational Biomedicine. He has spent most of his career in bioinformatics research in both industrial and academic settings. His research interests are eukaryotic genome information integration, modeling and visualization in three dimensions, and large-scale biological data integration and mining for translational medicine. In his early career, after being formally trained in both biology and computer science, Dr. Zheng worked on R&D projects in industry, conducting bioinformatics research in the areas of functional genomics and data management, genome annotation, comparative genomics, gene discovery in disease-relevant genomic regions, and the development of commercial genomic databases and bioinformatics software. Dr. Zheng and his colleagues developed Genome3D, the first model-view framework to integrate and visualize the 3D eukaryotic genome. His current work also includes the development of novel data mining methods to extract useful information from the biomedical literature for developing novel therapeutic strategies against cancer and other human diseases. Dr. Zheng serves on the editorial boards of several bioinformatics journals and currently receives funding from both NIH and NSF.

Speaker
W. Jim Zheng / University of Texas Health Science Center at Houston, USA

Abstract

We have entered a new data-centric economy era with the widespread use of Big Data infrastructures that deliver predictive real-time analysis and augmented intelligence in the three "P" sectors (Public, Private and Professional). Every economic sector will be drastically impacted. Such big data infrastructure involves two major scientific fields: computer science for data integration (i.e., building a virtual or real data lake) and mathematics for machine learning (ML) and deep learning (DL) within AI. The concept of a data lake was first introduced in 1999 by Pyle. Three types of data are involved in a big data architecture: structured data (with a predefined schema), semi-structured data (around XML, with metadata) and unstructured data (no schema, no metadata). A data lake is a generalization of the data warehouse to semi-structured and unstructured data. The data lake can be real (with pumping systems, as in most data warehouses) or virtual, with distributed large data sets. Today there is no SQL standard for managing a data lake, and there are many proprietary proposals encompassing new key features such as "external tables" within SQL FROM clauses referring to NoSQL files. The expected use of a data lake is predictive real-time analysis by data scientists using a large variety of ML and DL methods, generally in supervised, unsupervised or reinforcement modes; no interactivity exists among these methods.
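As a rough illustration of the "external table" idea (not tied to any particular vendor's SQL dialect), the sketch below uses the DuckDB Python package to query a hypothetical directory of Parquet files in place, much as a virtual data lake would, without loading them into a warehouse first.

    # Illustrative sketch: querying raw files in place, in the spirit of the
    # "external table" extensions mentioned above. Assumes DuckDB is installed
    # and that lake/events/*.parquet is a hypothetical set of data-lake files.
    import duckdb

    con = duckdb.connect()  # in-memory database; no data is "pumped" in
    df = con.execute("""
        SELECT user_id, COUNT(*) AS n_events
        FROM read_parquet('lake/events/*.parquet')  -- files act as an external table
        GROUP BY user_id
        ORDER BY n_events DESC
        LIMIT 10
    """).fetchdf()
    print(df)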

Biography

Speaker
Serge Miranda / University of Nice Sophia Antipolis, France

Abstract

The Internet of Things (IoT) marks a new phase in the evolution of the Internet because of the massive connectivity of end devices (sensors and actuators) and their interaction with the physical world. This imposes a new set of requirements on the already overused Internet. While these developments promise huge benefits, they also pose major challenges. IoT will evolve organically and become transparently involved in most human activities. As the number of devices increases, large IoT deployments can be seen as Ultra Large Scale Systems (ULSS), of which Autonomous Vehicles (AV) and smart cities are prime examples. ULSS present unique challenges to architects and developers, not only because of their massive scale in sheer numbers but also because of their richness in terms of the diversity of possible scenarios. Fog computing is a recent paradigm that has been introduced to handle scalability, connectivity and responsiveness. Fog computing is a highly distributed platform, with nodes located from near the end-user devices to the edge of the network. These nodes offer resources such as computing, storage and communication to the applications operating on this infrastructure. More recently, Hierarchical Emergent Behaviors (HEB) was introduced to deal with the complexity, management and orchestration of ULSS. During this talk, we will explore several areas, such as AV, where the convergence of these approaches results in unique solutions.

Biography

Mario Nemirovsky has been an ICREA Research Professor at the Barcelona Supercomputing Center since 2007. He received his PhD in ECE from the University of California, Santa Barbara, in 1990. He is presently conducting research in the areas of IoT and HPC. He holds 64 US patents. Mario is a pioneer in multithreaded hardware-based processor architectures and authored seminal works on simultaneous multithreading. He founded Starflow Inc., Vilynx Inc., Miraveo Inc., ConSentry Networks Inc., Flowstorm Inc. and XStream Logic Inc. Previously, he was chief architect at National Semiconductor, a researcher at Apple Computer, and an architect at Delco Electronics, General Motors.

Speaker
Mario Nemirovsky / Barcelona Supercomputer Center, Spain

Abstract

It has been more than a decade since the study of Knowledge Management (KM) became a topic of great interest for scholars. More traditional economies kept focusing on land, labor and capital as their main production factors and saw knowledge as external to the economic process. Over the last few decades, economists have started discussing the role of knowledge and technology in economic growth. The resource-based theory of the firm recognizes knowledge as a new, reproducible production factor, and investments in intangible assets can lead to economic growth without needing any extra labour power. In this knowledge-based economy, knowledge is becoming the primary production factor, which essentially explains an organization's sustainable competitive advantage. This macro-level observation has greater significance at the micro level, implying that companies need to emphasize knowledge in their business. Hence, knowledge management has become one of the hottest issues in the management literature at present.

Biography

Prof Iyer obtained his B.Tech. (Hons.) from the Indian Institute of Technology Kharagpur, his M.S. in Engineering from the University of Illinois at Urbana-Champaign, USA, and his Ph.D. in Systems Engineering from the University of California, Davis, USA. Prof. Iyer's teaching and research interests include project management; technology management and entrepreneurship; business systems analysis; and knowledge management. He has written books on Engineering Project Management with Case Studies, Project Evaluation and Implementation, and Strategic Management (all published by Vikas Publishing, New Delhi), and a chapter on bio-diesel energy systems and technology in a book on capacity building for sustainable development published by the Third World Academy of Sciences, Trieste, Italy. Dr Iyer has also published over 48 papers in international and national journals and conferences.

Speaker
Parameshwar P. Iyer / Indian Institute of Science India

Abstract

An ontology development tool is often the first thing that people see when they venture into the Semantic Web field. Ontology editors and visualization tools therefore carry a special responsibility for the success of the Semantic Web community. At the same time, the user communities around such tools serve as melting pots that can be exploited to collect feedback on the overall design of the language and associated systems. Protégé is one of the most widely used development platforms for ontology-based systems. This talk reports on experiences of using Protégé with OWL. The intention of Protégé, and especially of Protégé plugins, is to make Semantic Web technology available to a broad group of developers and users, and to promote best practices and design patterns. In this talk we walk through a selection of these issues and suggest directions for future work and standardization efforts.
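For illustration only, the snippet below uses the rdflib Python library to build a tiny OWL ontology of the kind Protégé edits; the "ex" namespace and the Sensor/Device classes are invented examples, not material from the talk.

    # Minimal illustration of the kind of OWL axioms an editor like Protégé
    # manages, expressed here with rdflib. All names are made-up examples.
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/ontology#")

    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Declare two classes and a subclass relation.
    g.add((EX.Device, RDF.type, OWL.Class))
    g.add((EX.Sensor, RDF.type, OWL.Class))
    g.add((EX.Sensor, RDFS.subClassOf, EX.Device))

    # Declare an object property linking sensors to the devices they monitor.
    g.add((EX.monitors, RDF.type, OWL.ObjectProperty))
    g.add((EX.monitors, RDFS.domain, EX.Sensor))
    g.add((EX.monitors, RDFS.range, EX.Device))

    # Serialize to Turtle; the resulting file can be opened in Protégé.
    g.serialize(destination="example_ontology.ttl", format="turtle")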

Biography

Salah-ddine Krit received the Habilitation in Physics-Informatics from the Faculty of Sciences, Ibn Zohr University, Agadir, Morocco, in 2015, and the B.S. and Ph.D. degrees in Software Engineering from Sidi Mohammed Ben Abdellah University, Fez, Morocco, in 2004 and 2009, respectively. During 2002-2008, he worked as an engineering team leader in audio and power management integrated circuits (ICs) research, covering the design, simulation and layout of analog and digital blocks dedicated to mobile phone and satellite communication systems using Cadence, Eldo, OrCAD and VHDL-AMS technology. He is currently a professor of Informatics at the Polydisciplinary Faculty of Ouarzazate, Ibn Zohr University, Agadir, Morocco. His research interests include Wireless Sensor Networks (software and hardware), computer engineering and wireless communications, genetic algorithms, gender and ICT, and smart cities.

Speaker
Salahddine Krit / Ibn Zohr University Agadir, Morocco

Abstract

Different formal methods have been adopted for context-aware systems modeling and analysis. Application development is still a challenging issue because of the complexity of these systems' adaptive behavior and their uncertainty features. A conceptual framework, as an ideal reuse technique, is one of the most suitable solutions to simplify the development of such systems and overcome their development complexity. In this talk we aim to define a framework that promotes the ability to specify and verify context-aware systems, in order to assist and facilitate the designer's task. To this end, we propose a context-aware-centered approach that presents a view of a smart city from different angles (Cloud, IoT, Big Data).

Biography

Professor Jalal Laassiri is an Associate Professor (HDR) at the Laboratoire d'Informatique, Systèmes et Optimisation (ISO Lab), Department of Informatics, Faculty of Sciences, Ibn Tofail University, Kenitra, Morocco. He obtained his Ph.D. degree in Computer Science, in the area of software engineering specification and verification systems, from Mohammed V University - Agdal, Faculty of Sciences, Rabat, and joined Ibn Tofail University as an Assistant Professor in 2010. His main research interests involve software engineering specification and verification systems and their applications to real-world problems. In particular, he has been working on modeling, simulation, access control system specifications and systems security.

Speaker
Jalal Laassiri / Université Ibn Tofaïl, Morocco

Abstract

In a world where technology is constantly evolving and competition is fierce, it is becoming necessary for companies to improve their products and to cover new business sectors in order to meet the new needs of their customers. Business Intelligence is an area of growing importance among customers' areas of interest. It focuses on decision support in order to anticipate trends by obtaining more information about the market. On the other hand, in recent years a significant percentage of companies have migrated their data to relational databases, which are currently massively used. However, these are not well adapted to decision-making tools, because the large volume of stored information makes the operating procedure complex. Among the business intelligence tools are those that rely on OLAP processes and provide a new arrangement of data storage in a multi-dimensional structure called a "cube". They allow a new structuring of data that is subject-oriented, relevant, useful, available when the user needs it most, and both understandable and targeted.
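As a rough illustration of the "cube" idea (a sketch with made-up data, not a tool from the talk), the following Python snippet aggregates a single measure over several dimensions using pandas; the column names and figures are invented.

    # Illustrative sketch only: a tiny OLAP-style "cube" built with pandas.
    import pandas as pd

    sales = pd.DataFrame({
        "year":    [2022, 2022, 2023, 2023, 2023],
        "region":  ["North", "South", "North", "South", "South"],
        "product": ["A", "A", "B", "A", "B"],
        "revenue": [100.0, 80.0, 120.0, 90.0, 150.0],
    })

    # Aggregate the 'revenue' measure along the year x region dimensions,
    # broken down by product, the way an OLAP cube slice would.
    cube = pd.pivot_table(sales,
                          values="revenue",
                          index=["year", "region"],
                          columns="product",
                          aggfunc="sum",
                          fill_value=0)
    print(cube)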

Biography

Marwa Issa completed her Engineering Degree in computer science at the Institute of Technology ESPRIT, Tunisia. She obtained her certification as a Scrum Master from the Scrum Alliance organization, Tunisia, and a systems and networks certification from Cisco, Tunisia. She has worked on different projects using various technologies and has been a software engineer for four years in Paris, France. She is engaged in continuous and versatile research in order to help evolve the technological systems she works on.

Speaker
Marwa Issa /

Abstract

Ontological concept evaluation is a difficult task. Until now, it has been done either by a domain expert or against a knowledge base (thesaurus, ontology, etc.). In this research, we propose an automatic evaluation method based on a large web document collection, several context definitions deduced from it, and three criteria. Our new contextual method is used for the evaluation of ontological concepts related to one specific domain and produced by clustering techniques. It is based on three revealing criteria that help the domain expert during the evaluation task: the credibility degree (the character of what we can believe); the cohesion degree (the character of a thing all of whose parts are united by a logical relationship between its elements, without contradiction); and the eligibility degree (the character of a word that combines the necessary conditions to be elected as a concept, since it is the most representative word of the cluster or the word that can orient the reasoning, interpretation or labeling task). Our method provides support for either a domain expert or a novice user. Moreover, it facilitates the semantic interpretation of the word clusters and consequently the generation of ontological concepts. Our contribution is to propose an evaluation framework that does not depend on a gold standard, can be applied to any domain even when expert intervention is not available, and provides qualitative and quantitative criteria. Our experiments show how our method assists and facilitates the evaluation task for the domain expert.

Biography

Dr. Lobna Karoui received a Master's degree in intelligent systems from the University Paris-Dauphine in 2004. She then started her PhD research at the University Paris-Sud Orsay and at the Electrical Higher School in Paris, France, obtaining her PhD in 2008. Her research interests include artificial intelligence, machine learning, cognitive science and the semantic web. Building on her information technology background and artificial intelligence research, she works on applied research to help companies develop artificial intelligence projects for their business and customers, such as semantic analysis in different functional domains, knowledge discovery, disruptive services and intelligent agents.

Speaker
Lobna Karoui / Université Paris-Sud Orsay, France

Abstract

The Smart Home has become a real topical issue deserving more research and work. It is a kind of evolution that will change housing habits in the coming years and can provide an easier and more effective lifestyle. The smart home/smartphone system is built by using technologies to control electric devices and sensors (such as temperature and PIR motion sensors). This paper aims to provide a model of a smart home/smartphone system that offers low-cost and flexible home monitoring. The proposed system allows home functions to be controlled remotely from anywhere in the world using an Android app downloaded to a smartphone.

Biography

Khaoula Karimi received the Engineer Degree in Software Engineering from the Faculty of Sciences and Technologies, Settat, Morocco, in 2015. She is currently a PhD student at the Polydisciplinary Faculty of Ouarzazate, Department of Mathematics, Informatics and Management, Ibn Zohr University, Agadir, Morocco. Her research interests include the design and implementation of smart home/smartphone systems.

Speaker
Khaoula Karimi / Ibn Zohr University Agadir, Morocco

Abstract

Cloud computing has been the most aggressively growing computing model of the last decade due to its convenience, flexibility, agility and ways of transforming enterprises' operational reach. Cloud computing makes IT-based scalable resource provisioning (compute, network, storage, memory, etc.) flexible and cost-effective. Cloud computing offers different architectures (public, private, hybrid and community) and services (IaaS, PaaS, SaaS), which enterprises need to assess closely to align their business model with the cloud architecture. Migrating in-house IT-based applications and services into the cloud may leave the enterprise susceptible to various risks in areas such as governance, compliance, risk management, data control, application performance, compatibility, failover and disaster recovery. This is where cloud economics aims at merging the cloud and the enterprise's business model by analysing the internal and external variables affecting cloud operations, management, cost, agility and growth. It also helps merge the business strategy with the cloud model, focusing on the critical success factors (CSFs) and key performance indicators (KPIs). These quantitative, benchmarked metrics (KPIs) indicate whether a service still needs improvement or is in an average or ideal state. The cloud Service Level Agreement (SLA) is the only method to control outsourced IT-based services and their quality of service. The SLA may describe the service performance at different levels (compliance, insights, visibility, control, etc.). Integrating the enterprise's KPIs into the SLA may help the enterprise be proactive, improve QoS, and overcome common cloud vendor lock-in issues and additional cloud-based costs.

Biography

Lubna Luxmi Dhirani is a PhD student in the Department of Electronic and Computer Engineering at the University of Limerick, Ireland. Her PhD research project is based on designing a system for securing the hybrid cloud in a tenant-vendor-third party situation. She currently has 4 publications supporting her PhD research. She completed an MSc in Business Information Technology (2008) in the United Kingdom and a B.Eng in Computer Systems (2006) in Pakistan. Lubna has worked as a lecturer for 3 years and taught various IT-based courses at SZABIST - Dubai, UAE Campus and ISRA University, Pakistan.

Speaker
Lubna Luxmi Dhirani / University of Limerick, Ireland

Abstract

In the era of globalization, commercial and mobility activities have increased, leading to a huge demand for air transportation. At the north-west corner of Africa, the air transport sector makes a major contribution to the economy of Morocco. To guide decision makers, data analysis has been widely used in several studies in the air transportation field and has provided useful results. In this paper we analyze commercial air traffic data in Morocco between 2012 and 2017 according to geographical area, in order to support the crucial aim of Air Traffic Control (ATC), which is to increase both safety and capacity so as to accept all flights without jeopardizing the lives of passengers or creating delays.
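For illustration only, the following Python sketch shows the kind of grouping such an analysis might perform; the file name and the columns (year, geo_area, movements) are hypothetical and not taken from the study.

    # Illustrative sketch: aggregating commercial traffic by geographical
    # area and year. All names and the CSV file are hypothetical.
    import pandas as pd

    traffic = pd.read_csv("morocco_air_traffic_2012_2017.csv")

    # Total commercial movements per geographical area and year.
    summary = (traffic
               .groupby(["geo_area", "year"], as_index=False)["movements"]
               .sum()
               .sort_values(["geo_area", "year"]))
    print(summary)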

Biography

Meryeme Hafidi received her Master's degree in Air Traffic Management from the Mohammed VI International Academy of Civil Aviation, Morocco, in 2016. She is currently an air traffic controller and a PhD student at the Polydisciplinary Faculty of Ouarzazate, Ibn Zohr University, Agadir, Morocco.

Speaker
Meryeme Hafidi / Ibn Zohr University Agadir, Morocco

Abstract

The design of an efficient routing protocol in the MANET environment is difficult because of its "short-lived" nature: network topologies change dynamically due to mobility, mobile nodes have limited battery power, and routes may fail at any time. Several parameters are normally considered when designing a routing algorithm for MANETs: (i) bandwidth utilization, (ii) managing the dynamic topology, (iii) managing routing overheads, (iv) battery constraints and (v) security threats. Routing in MANETs is a critical task, and the first four parameters are key when designing a routing algorithm. Given the possibilities offered by MANETs for future communication models, there is a need to develop a routing algorithm that is efficient in terms of route discovery, maintenance and power management. The problem of routing in MANETs has received growing research interest, and there are many opportunities that research in this area can open. An ideal route discovery algorithm should be able to track topological changes in MANETs and adapt the best route tree to address those changes accordingly. The goal is to minimize routing updates by designing a Hybrid Genetic Mutation Routing Algorithm, with Ad Hoc On-Demand Distance Vector (AODV) and Ad Hoc Multipath Distance Vector (AOMDV) as the base protocols for ad hoc networks. This will also help in reducing bandwidth and power resource consumption in mobile ad hoc networks (MANETs).
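As a purely illustrative aside (not the proposed hybrid AODV/AOMDV scheme), the sketch below shows one common way a genetic routing approach can mutate a candidate route; the node IDs and mutation rate are made up.

    # Illustrative route-mutation operator of the kind a genetic routing
    # scheme might use. A route is a list of node IDs; the source and
    # destination are kept fixed and one intermediate hop is replaced by a
    # randomly chosen node not already on the route.
    import random

    def mutate_route(route, all_nodes, mutation_rate=0.2):
        if len(route) <= 2 or random.random() > mutation_rate:
            return list(route)        # nothing to mutate, or mutation skipped
        mutated = list(route)
        idx = random.randrange(1, len(route) - 1)    # intermediate hop only
        candidates = [n for n in all_nodes if n not in mutated]
        if candidates:
            mutated[idx] = random.choice(candidates)
        return mutated

    # Example: mutate a 5-hop route drawn from a 10-node network.
    print(mutate_route([0, 3, 7, 4, 9], all_nodes=range(10)))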

Biography

Wilson Muange Musyoka is a PhD Computer Science student at Jomo Kenyatta University of Agriculture and Technology, Kenya, and completed a Master's in Computer Network Management at Middlesex University (London) in 2012. He is currently an assistant lecturer in the Computer Science Department at St. Paul's University, Limuru, Kenya. He has a keen interest in mobile networks, distributed computing, and networking protocols generally.

Speaker
Wilson Muange Musyoka / St. Pauls University, Kenya

Abstract

Performance events, or performance monitoring counters (PMCs), were originally conceived and widely used to aid low-level performance analysis and tuning. Nevertheless, they were opportunistically adopted for energy predictive modeling owing to the lack of a precise energy measurement mechanism in processors and the need to determine energy consumption at component-level granularity within a processor. Over the years, they have come to dominate research work in this area. Modern hardware processors provide a large set of PMCs. Determining the best subset of PMCs for energy predictive modeling is a non-trivial task, given that not all PMCs can be collected in a single application run. Several techniques have been devised to address this challenge. While some techniques are based on a statistical methodology, others use expert advice to pick a subset (that may not necessarily be obtainable in one application run) which, in the experts' opinion, contains the significant contributors to energy consumption. However, the existing techniques have not considered a fundamental property of predictor variables that should have been applied in the first place to remove PMCs unfit for modeling energy. We propose to address this oversight in this talk. We present a novel selection criterion for PMCs called additivity [1], which can be used to determine the subset of PMCs that can potentially be used for reliable energy predictive modeling. It is based on the experimental observation that the energy consumption of a serial execution of two applications is the sum of the energy consumptions observed for the individual execution of each application. A linear predictive energy model is consistent if and only if its predictor variables are additive, in the sense that the vector of predictor variables for a serial execution of two applications is the sum of the vectors for the individual executions of each application. The criterion, therefore, is based on a simple and intuitive rule: the value of a PMC for a serial execution of two applications should equal the sum of its values obtained for the individual execution of each application. A PMC is branded as non-additive on a platform if there exists an application for which the calculated value differs significantly from the value observed for the application execution on that platform. The use of non-additive PMCs in a model renders it inconsistent. This study will further be used to improve energy modeling for modern complex architectures and to improve optimization techniques and design space exploration [2].
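A minimal sketch of how the additivity rule could be checked in practice is given below; the counter names, values and tolerance are invented for illustration, and this is not the authors' tool.

    # Minimal sketch of the additivity test described above. For each PMC,
    # the value measured for the serial execution of applications A and B is
    # compared against the sum of the values measured for A and B run
    # individually; counters whose relative error exceeds the tolerance are
    # flagged as non-additive.
    def non_additive_pmcs(pmc_a, pmc_b, pmc_ab, tolerance=0.05):
        """pmc_a, pmc_b, pmc_ab: dicts mapping PMC name -> measured value."""
        flagged = []
        for name in pmc_ab:
            expected = pmc_a.get(name, 0) + pmc_b.get(name, 0)
            observed = pmc_ab[name]
            denom = max(abs(expected), abs(observed), 1)
            if abs(observed - expected) / denom > tolerance:
                flagged.append(name)
        return flagged

    # Example with made-up counter values: INSTR is additive, LLC_MISS is not.
    a  = {"INSTR": 1.0e9, "LLC_MISS": 2.0e6}
    b  = {"INSTR": 2.0e9, "LLC_MISS": 3.0e6}
    ab = {"INSTR": 3.0e9, "LLC_MISS": 9.0e6}
    print(non_additive_pmcs(a, b, ab))   # -> ['LLC_MISS']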

Biography

Speaker
Arsalan Shahid / University College Dublin, Ireland


