Tutorials

To access the open-access Sunday Program, click here: https://fuzz-virtual.org/schedule

To help attendees choose which tutorials may be most suitable for them, each tutorial below provides a ‘chilli-rating’ as follows:
🌶: Suitable for undergraduates, interested high school students, and equivalent; no prior expertise required.
🌶🌶: Most useful to attendees with some background in X.
🌶🌶🌶: A well-developed understanding of X is required to benefit fully from this tutorial.

A Hands-on Tutorial on Apache Spark for Solving Data Science Problems with Fuzzy logic

🌶: Some background in Python programming is required, along with some familiarity with fuzzy modelling.
Organisers: Isaac Triguero, Alberto Fernández, Mikel Galar.

A Top-Down Approach to Rule-Based Fuzzy Systems

🌶🌶: Most useful to attendees with some background in rule-based fuzzy systems
Organiser: Jerry Mendel.

Besides fuzzy sets: new theories for promptly moving your crisp systems’ full power to the fuzzy side.

🌶🌶: Background on Fuzzy Set Theory basics makes it easier to understand the tutorial.
Organiser: Daniel Sanchez.

Building interpretable fuzzy inference systems in Python

🌶🌶: A little prior knowledge of Python and fuzzy logic is assumed; attendees are also expected to already know why “explainable” systems are useful.
Organisers: Uzay Kaymak, Marco Nobile, Simone Spolaor, Caro Fuchs.

Explainable Fuzzy Systems: Paving the way from Interpretable Fuzzy Systems to Explainable Artificial Intelligence

🌶🌶: Most useful to attendees with some background in Explainable Artificial Intelligence.
Organisers: Jose Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena.

Fuzzy Systems for Brain Sciences & Interfaces.

🌶: No prior expertise required.
Organisers: Javier Andreu-Perez, Chin-Teng Lin.

How big is too big? Clustering in (static) BIG DATA with the Fantastic 4

🌶🌶: TBC
Organiser: James Bezdek.

New Optimization Techniques for TSK Fuzzy Systems: Classification and Regression

🌶🌶: Most useful to attendees with some background in fuzzy system training.
Organiser: Dongrui Wu.

Recent Developments in Artificial Intelligence and Fuzzy Systems

🌶🌶: Most useful to attendees with some background in Artificial Intelligence and Fuzzy Systems.
Organisers: Alexander Gegov, Uzay Kaymak, Joao Sousa.

Towards Trusting AI/ML Decision-Making of Autonomous Systems <Cancelled>

🌶🌶: Most useful to attendees with some background in AI/ML; an understanding of fuzzy set theory is also helpful.
Organiser: Stanton Price.

Uncertainty modeling in adversarial learning

🌶🌶: Most useful to attendees with some background in machine learning and fuzzy sets.
Organiser: Xi-Zhao Wang.

Capture and Handling of Uncertainty at Source – Using Intervals in AI and why it matters

🌶: While familiarity with data analysis and statistics (regression specifically) is valuable, no specific prior knowledge is required to benefit from the tutorial.
Organisers: Christian Wagner, Vladik Kreinovich, Shaily Kabir, Zack Ellerby.


More Details

A Hands-on Tutorial on Apache Spark for Solving Data Science Problems with Fuzzy logic

Organisers: Isaac Triguero (<isaac.triguero@nottingham.ac.uk>), Alberto Fernández (<alberto@decsai.ugr.es>), Mikel Galar (<mikel.galar@unavarra.es>).

In the era of big data, leveraging recent advances in distributed technologies enables a novel scenario known as data science, whose main goal is to discover unknown patterns or hidden relations in voluminous data in a faster way. Extracting knowledge from big data becomes a very interesting and challenging task in which we must consider new paradigms to develop scalable algorithms. However, computational intelligence models for machine learning, including those based on fuzzy logic, cannot be straightforwardly adapted to the new space and time requirements. Hence, existing algorithms must be redesigned, or new ones developed, to take advantage of their capabilities in the big data context. Moreover, real-world complex big data problems pose several issues besides computational complexity, and big data mining techniques should be able to deal with challenges such as dimensionality, class imbalance, and lack of annotated samples, among others.

The MapReduce framework, introduced by Google, allows us to carry out the processing of large amounts of information. Its open-source implementation, named Hadoop, has enabled the development of scalable algorithms and has become the de facto standard for addressing Big Data problems. Recently, new alternatives to the standard Hadoop-MapReduce framework have arisen to improve performance in this scenario, the Apache Spark project being the most relevant one. Even on Spark, the MapReduce model implies that existing algorithms need to be redesigned, or new ones developed, to take advantage of its capabilities in the big data context.

Data science is a fairly recent field of study, and it is still rapidly expanding. As such, the open directions for novel research are particularly associated with applying fuzzy systems to emerging scenarios in data science. Some clear examples are data streams, imbalanced classification, and high dimensionality, among others.

In this practical tutorial, we will first provide a gentle introduction to the problem of Big Data from the perspective of the development of fuzzy-based models. Then, we will dive into the field of Big Data analytics, describing several interesting case studies and real applications addressed with fuzzy approaches. Finally, we will run a hands-on session on Apache Spark and its machine learning library, MLlib; an illustrative code sketch follows the outline below.

Table of contents and estimated duration:

  • An Introduction to Big Data Analytics with Fuzzy-based Models (30 minutes)
  • Case Studies in Data Science with Fuzzy Approaches (30 minutes)
  • Hands-on session (45 minutes)
  • Final Discussion (15 minutes)
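
To give a concrete flavour of the hands-on session, here is a minimal PySpark sketch of the kind of exercise involved. It is our illustration rather than the organisers' actual material, and the column name and membership parameters are assumptions: it computes Gaussian fuzzy membership degrees for a numeric feature in parallel.

    # Illustrative sketch only (not the tutorial's actual notebook): computing
    # fuzzy membership degrees at scale with PySpark. The column name and the
    # Gaussian membership parameters below are assumptions for the example.
    import math

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.appName("fuzzy-demo").getOrCreate()

    # A toy stand-in for a big dataset: one numeric feature per row.
    df = spark.createDataFrame([(float(v),) for v in range(1000)], ["temperature"])

    def gaussian_mf(x, centre=500.0, sigma=150.0):
        """Degree of membership of x in a Gaussian fuzzy set."""
        return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

    membership = udf(gaussian_mf, DoubleType())

    # The membership computation is applied row by row across the cluster,
    # so the same code scales from a laptop to a non-loadable dataset.
    df.withColumn("mu_warm", membership(df["temperature"])).show(5)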

Intended audience and expected enrollment:

This tutorial is aimed at all those researchers involved in the development of fuzzy models as well as evolutionary algorithms, providing an overview of the existing technologies for dealing with Big Data problems. The audience will also come to understand the impact of such approaches in Data Science, in particular by means of our most recent research publications on the topic.

This proposal builds on the success of our previous tutorials at WCCI 2016, CEC 2017, FUZZ-IEEE 2017, WCCI 2018, and FUZZ-IEEE 2019. Prof. F. Herrera, Dr. I. Triguero, Dr. M. Galar, and Dr. A. Fernández delivered these tutorials in their different editions, each attracting an average of 50 participants. Thus, we expect a high number of participants, as this topic continues to be an exciting and underexplored field that can have a great impact on the way we analyze and exploit data.

Organiser Bios

Isaac Triguero received his M.Sc. and Ph.D. degrees in Computer Science from the University of Granada, Granada, Spain, in 2009 and 2014, respectively. He is currently an Assistant Professor in Data Science at the School of Computer Science of the University of Nottingham. He has published more than 35 international journal papers as well as more than 35 contributions to conferences. He is a Section Editor-in-Chief of the Machine Learning and Knowledge Extraction journal, and an associate editor of the Big Data and Cognitive Computing journal. He is also a reviewer for more than 30 international journals. He has acted as Program Co-Chair of the IEEE Conference on Smart Data (2016), the IEEE Conference on Big Data Science and Engineering (2017), and the IEEE International Congress on Big Data (2018). He has acted as guest editor for special issues in journals such as Information Sciences, Cognitive Computation, IEEE Access, and Big Data Analytics. His research interests include data mining, data reduction, biometrics, optimization, evolutionary algorithms, semi-supervised learning, bioinformatics and big data learning. Dr Triguero is currently leading a Knowledge Transfer Partnership project, funded by Innovate UK and the energy provider E.ON, that investigates smart metering data.

Alberto Fernández received the M.Sc. and Ph.D. degrees in computer science from the University of Granada, Granada, Spain, in 2005 and 2010, respectively. He is currently an Assistant Professor with the Department of Computer Science and Artificial Intelligence, University of Granada, Spain. He has published more than 100 papers in highly rated JCR journals and international conferences. In 2013, 2014, and 2017 Dr. Fernández received the University of Granada Prize for Scientific Excellence Works in the field of Engineering. In 2011 he was also awarded the Lotfi A. Zadeh Best Paper prize (IFSA Association). He has recently been selected as a Highly Cited Researcher http://highlycited.com (in the field of Computer Science, 2017 Clarivate Analytics). He is a member of the editorial boards of several JCR journals such as Applied Intelligence and PLOS ONE, and has also acted as guest editor for special issues in Cognitive Computation and Big Data Analytics. He is also a member of the Spanish Association of Artificial Intelligence (AEPIA). His research interests include classification in imbalanced domains, fuzzy rule learning, evolutionary algorithms, multiclassification problems with ensembles and decomposition techniques, data science in big data applications, and the field of Bioinformatics. He has been involved in several projects applying these subjects of study in industry, healthcare, and economics, among others.

Mikel Galar received the M.Sc. and Ph.D. degrees in Computer Science in 2009 and 2012, both from the Public University of Navarre, Pamplona, Spain. He is currently an assistant professor at the Department of Statistics, Computer Science and Mathematics at the Public University of Navarre. He is the author of 35 published original articles in international journals and more than 50 contributions to conferences. He is also a reviewer for more than 35 international journals. His research interests are data mining, classification, multi-classification, ensemble learning, evolutionary algorithms, fuzzy systems and big data. He is involved in the application of these techniques to industry, healthcare and remote sensing. He is a member of the IEEE, the European Society for Fuzzy Logic and Technology (EUSFLAT) and the Spanish Association of Artificial Intelligence (AEPIA). He has received the extraordinary prize for his PhD thesis from the Public University of Navarre and the 2013 IEEE Transactions on Fuzzy Systems Outstanding Paper Award for the paper “A New Approach to Interval-Valued Choquet Integrals and the Problem of Ordering in Interval-Valued Fuzzy Set Applications” (bestowed in 2016).

A Top-Down Approach to Rule-Based Fuzzy Systems

Organiser: Jerry Mendel (<jmmprof@me.com>).

Having recently completed the second edition of a textbook about rule-based fuzzy systems, I have become very concerned about the amount of time and background knowledge that are required before a new person is able to use such systems. Such a large investment of time on the part of a new person, who often needs a solution to a real-world problem within a modest amount of time, may drive that person to use what they think is a competitive methodology that does not require such a large investment. I believe that the major cause for the large amount of time required to learn rule-based fuzzy systems is the widely used bottom-up approach for developing and studying them. This tutorial explains a novel top-down approach to rule-based fuzzy systems, one that begins with the product and then addresses the unique features of the product, without requiring the reader to know anything about fuzzy sets and systems, so that the attendee has something to use with a very small investment of time.
The “products” for rule-based fuzzy systems are input-output equations (formulas). What a new end-user needs to know for each formula is: (1) a simple explanation for it, (2) what the unique features are for it, (3) how it can be used to solve real-world problems, (4) how it competes with formulas from other technologies that might also be used for the same real-world applications; and, (5) where a computer program can be found for it. This tutorial will illustrate the top-down approach beginning with simple products and concluding with advanced products and will address what an end-user needs to know in order to use it, i.e. items (1)–(4) just stated. Learning about rule-based fuzzy systems can be greatly compressed by using the top-down approach that is described in this Tutorial. It will let you cover them in an existing course in a small number of lectures.
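
For concreteness, here is a minimal sketch (ours, not material from the tutorial) of one such product: the input-output formula of a first-order TSK rule-based fuzzy system, assuming Gaussian membership functions and the product t-norm. The output is simply a firing-strength-weighted average of per-rule linear models.

    # Our illustration of one "product": the input-output formula of a
    # first-order TSK rule-based fuzzy system with M rules and P inputs,
    #   y(x) = sum_i w_i(x) * (A_i . x + b_i) / sum_i w_i(x),
    # where w_i(x), the firing strength of rule i, is the product of the
    # Gaussian memberships of x in rule i's antecedent sets.
    import numpy as np

    def tsk_output(x, centres, sigmas, A, b):
        """x: (P,) input; centres, sigmas: (M, P) antecedent MF parameters;
        A: (M, P) and b: (M,) linear consequent parameters."""
        mu = np.exp(-((x - centres) ** 2) / (2.0 * sigmas ** 2))  # memberships
        w = mu.prod(axis=1)          # firing strengths (product t-norm)
        y_rules = A @ x + b          # per-rule linear outputs
        return np.sum(w * y_rules) / np.sum(w)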

Organiser Bio

Jerry M. Mendel (LF’04) received the Ph.D. degree in electrical engineering from the Polytechnic Institute of Brooklyn, Brooklyn, NY. Currently, he is Emeritus Professor of Electrical Engineering at the University of Southern California in Los Angeles. He has published over 580 technical papers and is author and/or co-author of 13 books, including Uncertain Rule-based Fuzzy Systems: Introduction and New Directions, 2nd ed., Perceptual Computing: Aiding People in Making Subjective Judgments, and Introduction to Type-2 Fuzzy Logic Control: Theory and Application. He is a Life Fellow of the IEEE, a Distinguished Member of the IEEE Control Systems Society, and a Fellow of the International Fuzzy Systems Association. He was President of the IEEE Control Systems Society in 1986, a member of the Administrative Committee of the IEEE Computational Intelligence Society for nine years, and Chairman of its Fuzzy Systems Technical Committee and the Computing With Words Task Force of that TC. Among his awards are the 1983 Best Transactions Paper Award of the IEEE Geoscience and Remote Sensing Society, the 1992 Signal Processing Society Paper Award, the 2002 and 2014 Transactions on Fuzzy Systems Outstanding Paper Awards, a 1984 IEEE Centennial Medal, an IEEE Third Millennium Medal, and a Fuzzy Systems Pioneer Award (2008) from the IEEE Computational Intelligence Society. His present research interests include type-2 fuzzy logic systems and computing with words.

Besides fuzzy sets: new theories for promptly moving your crisp systems’ full power to the fuzzy side.

Organiser: Daniel Sanchez (<daniel@decsai.ugr.es>).

The first objective of the tutorial is to introduce the theory of Representation by Levels (RLs) to
the audience, as well as closely related theories such as Dubois and Prade’s Gradual Sets, and
Trevor Martin’s X-μ approach. These theories allow us to represent “fuzziness” in subsets of X by means of functions ranging from (0,1] to the power set of X, instead of the usual fuzzy set
approach, where membership functions range from X to [0,1]. Representations by levels can be used to represent “fuzziness” in mathematical objects other than sets, such as individual elements. For instance, real numbers affected by fuzziness are represented by functions ranging from (0,1] to the set of real numbers.

Representations by levels can be employed as a way to “fuzzify” any crisp system, algorithm, concept or theory in a single, straightforward way, while keeping all the properties of the original crisp counterpart. This is due to i) operations being performed in each level independently, and ii) no restriction being required among levels (for instance, no nesting is required in the representation for sets, contrary to alpha-cut representations of fuzzy sets). Such features cannot be provided by fuzzy set theories and their extensions; paradigmatic examples are the inability of fuzzy set theories to keep all the Boolean properties of set operations simultaneously, and the inability of fuzzy numbers to keep the algebraic structure of crisp numbers. In this way, representation by levels makes it possible to promptly adapt crisp systems to deal with fuzzy information without the need to determine the most appropriate operators for extending set operations, arithmetic operations, or any other kind of operator or procedure. This ease of use, together with the unique feature that the resulting fuzzy system keeps all the properties of the crisp version, makes Representations by Levels suitable for incorporating fuzzy sets in the development of systems that i) need to deal with fuzzy information (concepts, data, etc.), ii) need to keep properties that cannot be provided simultaneously by fuzzy sets, and iii) require promptly developed prototypes.

Though different in structure, operations and properties, RLs and fuzzy sets are tightly coupled and complementary. They are tightly coupled because i) the collection of alpha-cuts of a fuzzy set with a finite number of membership degrees (which is the case in practice for both computers and humans) is a particular case of RLs, and ii) fuzzy sets can be viewed as probability measures defined on RLs (different RLs provide the same fuzzy set when measured, with the degree of an individual element meaning the “probability of membership to a level taken at random”). But they are also complementary in the sense that fuzzy sets are much easier for users to understand than RLs. These ideas lead to the notion of RL-systems: systems in which the input and output take the form of fuzzy sets, and where the internal representation and operations, hidden from the users, are performed by means of RLs. Remarkably, i) the output fuzzy sets are not employed as input to another system, the RLs being used instead, and ii) the output fuzzy sets do not in general correspond to those that can be obtained by ordinary fuzzy set theories and extensions, as the latter do not in general keep the crisp properties of the original system.
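
To fix ideas, the following minimal sketch (ours, assuming a finite universe and a finite set of levels) represents an RL as a mapping from levels to crisp sets and applies set operations level-wise. The asserts check that Boolean laws lost by fuzzy connectives hold here, even though the levels are not nested.

    # Our sketch of a representation by levels (RL) over a finite universe,
    # assuming a finite set of levels in (0, 1]. Operations are performed in
    # each level independently, and no nesting is required across levels.
    U = frozenset("abcd")
    LEVELS = (0.25, 0.5, 0.75, 1.0)

    def rl_union(A, B):
        return {lv: A[lv] | B[lv] for lv in LEVELS}

    def rl_intersection(A, B):
        return {lv: A[lv] & B[lv] for lv in LEVELS}

    def rl_complement(A):
        return {lv: U - A[lv] for lv in LEVELS}

    # An RL whose levels are NOT nested (impossible for alpha-cuts):
    A = {0.25: frozenset("abc"), 0.5: frozenset("ab"),
         0.75: frozenset("bd"), 1.0: frozenset("b")}

    # All Boolean laws hold level-wise, unlike with fuzzy connectives:
    assert rl_union(A, rl_complement(A)) == {lv: U for lv in LEVELS}
    assert rl_intersection(A, rl_complement(A)) == {lv: frozenset() for lv in LEVELS}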

Overall, the tutorial is intended to show attendees tools and methodologies for extending crisp systems to the fuzzy case as RL-systems, allowing for a prompt adaptation that keeps the properties of the crisp case and is based on fuzzy sets as input and output of the system, hence improving understandability and benefiting from the available techniques for communicating with users using fuzzy set theory. In this way, RLs are not conceived as a substitute for, or a better theory than, fuzzy set theory, but as a complement which enlarges our available toolbox for the development of fuzzy systems, concepts, algorithms and theories with new powerful capabilities, following the idea of RL-systems.

The tutorial will take the form of a presentation of the concepts as stated above, with theoretical background and both theoretical and practical examples illustrating the theory of
Representations by Levels and its application to specific problems like cardinality, probability
and quantification, the arithmetic of RL-numbers, clustering, association rule mining and referring expression generation within Natural Language Generation, among others.

Outline of the covered material:
A tentative outline is the following:
1. Representations by levels: basic concepts and operations. Related theories.
2. From fuzzy sets to RLs and back: a non-bijective relation.
3. RL-systems: how to promptly take your crisp system to the fuzzy side.
4. Application examples.
5. Open challenges.

Though the first publications on the topic date back to 2008, RLs are not yet widely known by the fuzzy set community (not to mention outside it). However, many papers with theoretical results and practical applications have been published since then, showing the suitability of the approach. In our opinion, RLs also have the potential to push fuzzy set theory into other scientific communities such as Natural Language Generation or Artificial Intelligence in general. In particular, several authors in these communities point to drawbacks of fuzzy set theories that prevent them from being considered in practice: the lack of clear rules for choosing the most suitable operators for each problem, and the inability to keep the ordinary properties of the crisp case. RL-systems provide both things while allowing for partial membership, both in the input/output by means of fuzzy sets, and in the internal representation and operations via RLs. In our conversations with some relevant authors in the above-mentioned communities, they have found the approach interesting and worth exploring. On the other hand, the fuzzy set community is nowadays highly interested in promoting fuzzy sets in the AI community, and we envisage RLs as a way to offer a novel “fuzzy” tool that may overcome existing reluctance to consider proposals coming from our area. Overall, we think that the timeliness of our proposal is justified and, if accepted, our intention is to disseminate the call for participation both within the fuzzy set community and in other communities such as those working in AI and NLG.

The proposed duration is one and a half hours. The proposer is one of the co-authors of the RL theory, and has been working and publishing results in the field since 2008, including collaborations with Didier Dubois, co-author of the closely related Gradual Sets. This collaboration led to a joint paper in which RLs are employed to develop a new fuzzy clustering technique, remarkably able to solve the problem posed by Bezdek and Harris in 1978 about how to obtain convex decompositions of similarity relations for clustering.

Organiser Bio

Daniel Sánchez received the M.S. and Ph.D. degrees in computer science, both from the University of Granada, Spain, in 1995 and 1999, respectively. Since 2017 he is Full Professor at the Department of Computer Science and Artificial intelligence of the University of Granada. He has published more than 200 papers in international journals and conferences on the topics of fuzzy set theory and representations by levels, cardinality and quantification, and its application in fuzzy systems, flexible querying, machine learning and data mining, data2text natural language generation, and linguistic summarization and description of data. He has organized several workshops and special sessions in these fields. He is currently Area Editor of the Elsevier International Journal Fuzzy Sets and Systems, and one of the coordinators of the DAMI Working group on Data Mining and Machine Learning of the European Society for Fuzzy Logic and Technology (EUSFLAT).

Building interpretable fuzzy inference systems in Python

Organisers: Uzay Kaymak (<u.kaymak@ieee.org>), Marco Nobile (<m.s.nobile@tue.nl>), Simone Spolaor (<m.s.nobile@tue.nl>), Caro Fuchs (<c.e.m.fuchs@tue.nl>).

One definition of artificial intelligence (AI) is “building systems that act rationally”, an approach that has been the driving force of AI research in the last decade (Russell & Norvig, 2000). However, the practical deployment of AI-based decision systems is hampered in all domains in which liability plays a major role (e.g., health care). As a matter of fact, regardless of the performance of a system, the decisions that an AI suggests do not generate trust as long as their rationale cannot be inspected and understood by domain experts. Stated otherwise, the development of interpretable systems represents an enabling technology for the widespread adoption of AI. It is also worth noting that even the EU General Data Protection Regulation explicitly mandates the “right of explanation” (see, e.g., Recital 71 of EU GDPR), so that no black-box decision system should be used to make automatic decisions having legal effects on a subject if they cannot be explained.
This tutorial is designed to describe how interpretable AI systems can be easily created by using fuzzy systems, either employing domain knowledge or according to a data-driven approach. At the end of the tutorial, attendees will be able to create interpretable AI systems in the form of fuzzy models, by leveraging two recently published Python libraries, namely Simpful and pyFUME. Attendees will be guided in the creation of such models and the evaluation of their results. In order to do so, they will be provided with some usage examples based on bio-medical use cases and invited to apply the presented methods to their own data sets.

The session can be subdivided into three main Sections:
1. theoretical background (15 mins + 5 mins Q&A);
2. knowledge-driven modeling (20 mins + 10 mins Q&A)
3. data-driven modeling (40 mins + 10 mins Q&A)
In Section #1, we will explain the motivation for the development of interpretable AI systems. Then, we will explain the theoretical foundations of our approach based on Fuzzy Logic.
Section #2 will be focused on the implementation according to domain knowledge. The library exploited for this part will be Simpful. Simpful’s syntax and API will be described in detail.
Section #3 will be focused on the data-driven implementation. The library exploited for this part will be pyFUME. The functioning and the API of pyFUME will be described in detail.

The tutorial assumes knowledge of the Python 3 programming language. Explanations for the setup of the system (i.e., how to install and test Simpful and pyFUME) can be found at the following web address: bit.ly/tutorial-fuzzieee.
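
As a preview, the snippet below follows the canonical tipping-problem example from Simpful's documentation; the setup page above remains the authoritative reference for the current API.

    # A preview based on the canonical tipping example in Simpful's
    # documentation; see the setup link above for the authoritative API.
    from simpful import FuzzySystem, FuzzySet, LinguisticVariable, Triangular_MF

    FS = FuzzySystem()

    # Antecedent: quality of service on a 0-10 scale.
    poor = FuzzySet(function=Triangular_MF(a=0, b=0, c=5), term="poor")
    excellent = FuzzySet(function=Triangular_MF(a=0, b=5, c=10), term="excellent")
    FS.add_linguistic_variable("Service",
        LinguisticVariable([poor, excellent], universe_of_discourse=[0, 10]))

    # Zero-order (Sugeno) consequents: crisp tip values.
    FS.set_crisp_output_value("small", 5)
    FS.set_crisp_output_value("generous", 25)

    # The rule base is written, and read, in plain language.
    FS.add_rules([
        "IF (Service IS poor) THEN (Tip IS small)",
        "IF (Service IS excellent) THEN (Tip IS generous)",
    ])

    FS.set_variable("Service", 4)
    print(FS.Sugeno_inference(["Tip"]))

pyFUME automates the data-driven counterpart of this workflow, estimating the membership functions and rules directly from a dataset.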

At the end of the tutorial, the attendees will be able to autonomously build up an interpretable AI system, either based on domain knowledge or automatically inferred from available data.

This tutorial will cover the motivations and the aims behind the choice of building interpretable AI systems. In particular, the tutorial will focus on the use of fuzzy systems as a tool to achieve more interpretable models in decision making and complex systems simulation. In order to facilitate aspiring scientists and newcomers to the field, a brief introduction to the fundamentals of fuzzy inference systems will be provided, including an outline of fuzzy set theory and fuzzy reasoning and logic.

The practical part of the tutorial will include the presentation of the two Python libraries (i.e., Simpful and pyFUME). In the Simpful session, attendees will learn how to define fuzzy inference systems with this novel library. Once attendees are familiar with Simpful, the tutorial will move on to the next session, involving data-driven approaches. In this session, attendees will learn to infer (minimal) fuzzy models from given data sets by exploiting pyFUME.

Potential audience: Both Simpful and pyFUME have been developed to fit the needs of academics and practitioners. They are both easy to use (because of their pre-implemented and user-friendly pipelines) and flexible for scientists that wish to fine-tune each step of the fuzzy model estimation process. We therefore envision that this tutorial will be useful to aspiring scientists and industrial partners, who wish to learn more about fuzzy models and their development, but also to senior scientists who search for a tool to help them conduct their studies. They will also be able to assess first-hand the advantages the existing libraries offer for their applications.

Plans for promotion: Mailing lists (EUSFLAT, SIKS, IFSA, IEEE CIS Newsletter), ads in the context of the Summer School on Fuzzy Logic and Applications (including the SFLA Facebook and Twitter pages) and in the events organized by the IEEE CIS task force on advanced representation in biological and medical search and optimization.

Timeliness of the given tutorial: In the past few years, Artificial Intelligence (AI) has emerged as an important and far-reaching research topic with applications in many fields. This development has led to the recent trend to pursue interpretable or explainable AI (XAI), which is not only believed to build trust between the AI system and its user but can also give insights into the way decisions are taken and therefore create new knowledge.

In this tutorial, we show the participants how they can use their own data to create interpretable AI systems in the form of fuzzy models, leveraging Simpful and pyFUME. Both Simpful and pyFUME are implemented in Python 3, one of the most popular and widespread programming languages. Thanks to its high readability and expressive power, Python has found application in several fields dealing with computer and data science, as well as in multidisciplinary research and industrial sectors. Both libraries were published in peer-reviewed journals during the last year and are available in public repositories. Participants will be provided with updated information and practical examples on the usage of the libraries. They will also be invited to employ the libraries on their own datasets and use-cases.

Duration: 100 minutes (divided into 3 blocks with Q&As)

Organiser Bios

Uzay Kaymak is a Full Professor and Chair of Information Systems in Health Care at Eindhoven University of Technology (TU/e). His research focuses on intelligent decision support systems, data and process mining and computational modeling methods. He has worked on the development of computational intelligence methods for decision models in which linguistic information, represented either as declarative linguistic rules derived from experts or obtained through natural language processing, is combined with numerical information extracted from data by computational and machine learning methods. Fuzzy set theory is at the basis of such models. The resulting (adaptive) decision support systems have been used in various fields such as financial decision-making, economic analysis and clinical decision support.

Marco S. Nobile is Assistant Professor at the Eindhoven University of Technology (TU/e), the Netherlands. He received his BSc, MSc and PhD in Computer Science from the University of Milano-Bicocca, Italy. His research focuses on the development of novel Computational Intelligence methods for bio-medical research, leveraging evolutionary computation, swarm intelligence and fuzzy logic. He is a member of the Bicocca Bioinformatics, Biostatistics and Bioimaging research center (Milano, Italy). He is also a member of the IEEE CIS Technical Committee on Bioinformatics & Bioengineering and he is also chair of the IEEE Task Force on advanced representation in biological and medical search and optimization.

Simone Spolaor is a Research Fellow at the Department of Informatics, Systems and Communication, University of Milano-Bicocca, Italy. His research focuses on the modeling, simulation and analysis of complex biochemical systems by means of Computational Intelligence methods, in particular by leveraging fuzzy logic and evolutionary computation.

Caro Fuchs pursues her Ph.D. degree in the Information Systems group of the Department of Industrial Engineering & Innovation Sciences at the Eindhoven University of Technology (TU/e). Her research interests include interpretable decision support systems, data analysis, machine learning and computational modeling methods, with a special focus on fuzzy inference systems. She mainly applies her research in the healthcare domain.

Explainable Fuzzy Systems: Paving the way from Interpretable Fuzzy Systems to Explainable Artificial Intelligence

Organisers: Jose Alonso (<josemaria.alonso.moral@usc.es>), Ciro Castiello (<ciro.castiello@uniba.it>), Corrado Mencar (<corrado.mencar@uniba.it>), Luis Magdalena (<luis.magdalena@upm.es>).

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate and pre-process data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from data. Indeed, AI has been identified as the “most strategic technology of the 21st century” and is already part of our everyday life [1]. The European Commission states that “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI in short), in order to develop an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the last challenge stated by the USA Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans, providing comprehensible explanations of decisions made automatically.

XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human could understand in a given context, and explicitly explaining such decisions. This way, it is possible to verify if automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.
Even though XAI systems are likely to make their impact felt in the near future, there is a lack of experts versed in the fundamentals of XAI, i.e., ready to develop and maintain the new generation of AI systems that are expected to surround us soon. This is mainly due to the inherent multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous research fields. Moreover, it is hard to find XAI experts with a holistic view as well as a wide and solid background in all the related topics.
Consequently, the main goal of this tutorial is to provide attendees with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and how to enhance human-machine interaction.

Plan
The tutorial will cover the main theoretical concepts of the topic, as well as examples and real applications of XAI techniques. In addition, ethical and legal aspects concerning XAI will also be considered. 
The tutorial duration is about 2 hours, with four blocks of 30 minutes each covering the different questions under consideration:

  • We will first introduce the general ideas behind XAI by referring to real-world problems that would greatly benefit from XAI technologies. Also, some of the most recent governmental and social initiatives which favor the introduction of XAI solutions in industry, professional activities and private lives will be highlighted. This part is therefore devoted to motivating the audience on the potential impact of XAI in everyday life and, therefore, on the importance of its scientific and technical investigation.
  • The second part of the tutorial will be devoted to a gentle introduction to the main state-of-the-art methods for XAI. This part is cross-field, but it globally falls within the realm of Computational Intelligence. The idea of “opening the black box” will be stressed (the “black boxes” being models designed through Machine Learning techniques, such as deep neural networks), as well as several approaches for dealing with the concept of “explanation”. Special attention will be given to natural language explanation (e.g., natural language generation of explanations, argumentative techniques, human-machine interaction, etc.).
  • The third part of the tutorial will take into account the special role of fuzzy logic in XAI. It will be shown that fuzzy logic offers special features that enable a rich representation of concepts expressible in natural language; therefore it may be a privileged choice for explanation generation and processing. Interpretable fuzzy systems use fuzzy logic to represent knowledge that is easy for users to read and understand: hence, they will be revisited from the point of view of XAI and natural language generation.
  • Space will be reserved to address some currently open research directions, such as the evaluation of XAI systems, to stimulate the audience, especially young researchers, to explore and contribute to this new field. Finally, the tutorial will also offer the opportunity to present software tools implementing some XAI technologies. Some of these tools will be used to show a use case in order to make XAI results tangible to the audience.

Intended Audience
This tutorial is of interest for researchers, practitioners and students (PhD or Master students) working in the fields of Artificial and Computational Intelligence; with special emphasis on Fuzzy Logic. Since our aim is to provide attendees with a holistic view of fundamentals and current research trends in the XAI field, and having in mind the broad interest of the topic, the presentations will be designed to be accessible to everyone no matter their background.

Outline of the Covered Material

  1. Introduction: Motivating Principles and Definitions.
  2. Review of the most Outstanding Approaches for Designing and Developing XAI systems
    1. Review of the most outstanding approaches for Opening Black Boxes
    2. Review of Natural Language Technology for XAI
    3. Review of Argumentation Technology for XAI
    4. Review of Interactive Technology for XAI
    5. Review of Software Tools for XAI
  3. Fuzzy Technology for XAI
    1. Building Interpretable Fuzzy Systems
    2. Building Explainable Fuzzy Systems
    3. Software Tools
  4. Evaluation of XAI systems
  5. Use Case
  6. Concluding Remarks

Organiser Bios

Jose M. Alonso received his M.S. and Ph.D. degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain, in 2003 and 2007, respectively. Since June 2016, he has been a postdoctoral researcher at the University of Santiago de Compostela, in the Research Centre in Intelligent Technologies (CiTIUS). He is currently Chair of the Task Force on “Explainable Fuzzy Systems” in the Fuzzy Systems Technical Committee of the IEEE Computational Intelligence Society, Associate Editor of the IEEE Computational Intelligence Magazine (ISSN: 1556-603X), secretary of the ACL Special Interest Group on Natural Language Generation, and coordinator of the H2020-MSCA-ITN-2019 project “Interactive Natural Language Technology for Explainable Artificial Intelligence” (NL4XAI). He has published more than 140 papers in international journals, book chapters and peer-reviewed conferences. His research interests include computational intelligence, explainable artificial intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools.

Ciro Castiello graduated in Informatics in 2001 and received his Ph.D. in Informatics in 2005. Currently he is an Assistant Professor at the Department of Informatics of the University of Bari Aldo Moro, Italy. His research interests include: soft computing techniques, inductive learning mechanisms, interpretability of fuzzy systems, and eXplainable Artificial Intelligence. He has participated in several research projects and published more than seventy peer-reviewed papers. He is also regularly involved in the teaching activities of his department. He is a member of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the INdAM Research group GNCS (Italian National Group of Scientific Computing).

Corrado Mencar is Associate Professor in Computer Science at the Department of Computer Science of the University of Bari “A. Moro”, Italy. He graduated in Computer Science in 2000 and obtained his Ph.D. in Computer Science in 2005. In 2001 he was an analyst and software designer for some Italian companies. Since 2005 he has been working on research topics concerning Computational Intelligence and Granular Computing. As part of his research activity, he has participated in several research projects and has published over one hundred peer-reviewed international scientific publications. He is also Associate Editor of several international scientific journals, as well as Featured Reviewer for ACM Computing Reviews. He regularly organizes scientific events related to his research topics with international colleagues. Currently, he is Vice-chair of the IEEE CIS Task Force on “Explainable Fuzzy Systems”. His research topics include fuzzy logic systems with a focus on interpretability and Explainable Artificial Intelligence, Granular Computing, Computational Intelligence applied to the Semantic Web, and Intelligent Data Analysis. As part of his teaching activity, he is, or has been, the holder of numerous classes and PhD courses on various topics, including Computer Architectures, Programming Fundamentals, Computational Intelligence and Information Theory.

Luis Magdalena is Full Professor with the Dept. of Applied Mathematics for ICT of the Universidad Politécnica de Madrid. From 2006 to 2016 he was Director General of the European Centre for Soft Computing in Asturias (Spain). Under his direction, the Centre was recognized with the IEEE-CIS Outstanding Organization Award in 2012. Prof. Magdalena has been actively involved in more than forty research projects. He has co-authored or co-edited ten books, including “Genetic Fuzzy Systems”, “Accuracy Improvements in Linguistic Fuzzy Modelling”, and “Interpretability Issues in Fuzzy Modeling”. He has also authored over one hundred and fifty papers in books, journals and conferences, receiving more than 6000 citations. Prof. Magdalena has been President of the European Society for Fuzzy Logic and Technologies, Vice-president of the International Fuzzy Systems Association, and is Vice-President for Technical Activities of the IEEE Computational Intelligence Society for the period 2020-21. Further info at https://sites.google.com/view/tutorial-on-xai-fuzzieee2021

Fuzzy Systems for Brain Sciences & Interfaces.

Organisers: Javier Andreu-Perez (<javier.andreu@essex.ac.uk>), Chin-Teng Lin (<chin-teng.lin@uts.edu.au>).

This tutorial will introduce the latest work on the involvement of fuzzy systems in neuroscience and neuro-engineering. Given the important challenges associated with the processing of brain signals obtained from neuroimaging modalities, fuzzy sets and systems have been proposed as a useful and effective framework for the analysis of brain activity, as well as for enabling a direct communication pathway between the brain and external devices (brain-computer/machine interfaces). While there has been increasing interest in these questions, the contribution of fuzzy logic, sets, and systems has been diverse depending on the area of application in neuroscience. On the one hand, for the decoding of brain activity, fuzzy sets and systems represent an excellent tool to overcome the challenge of processing extremely noisy signals that are very likely to be affected by high uncertainty. On the other hand, in neuroscience research, fuzziness has equally been employed for measuring the smooth integration between synapses, neurons, and brain regions or areas. In this context, the proposed tutorial aims at introducing the latest advancements and enabling a discussion among researchers interested in employing fuzzy sets, logic and systems for the analysis of brain signals and neuroimaging data, including related disciplines such as computational neuroscience, brain-computer/machine interfaces, neuroscience, neuroinformatics, neuroergonomics, affective neuroscience, neurobiology, brain mapping, neuroengineering, and neurotechnology. In this tutorial we will explain the current state of the art of the application of fuzzy systems in the domain of neuroscience. Moreover, we will also explain their potential applications in the field of Brain-Computer Interfaces, and their performance with respect to competing methods.

Outline of the covered material

This tutorial is intended to provide attendees with the knowledge of neuroscience and fuzzy systems needed to produce work in the field. The tutorial will not require attendees to have prior knowledge of either neuroscience or fuzzy systems. The session will cover the following syllabus:

  1. Introduction to Non-Invasive Neuroimaging methods (EEG & fNIRS)
  2. Rationale for Fuzzy Systems in Neuroscience.
  3. Introduction to methodologies for the Analysis of Brain Signals in Neuroscience (connectomes, brain activation analysis, ROI analysis, brain decoding).
  4. Introduction to the mechanics of helpful structures in Fuzzy Logic and Systems to use with brain data (Type-1 set extensions, neuro-fuzzy and evolutionary fuzzy models, fuzzy integrals and fuzzy entropies); see the sketch after this list.
  5. Current advanced methods of Fuzzy Systems for Neuroscience Applications.
  6. Current advanced methods of Fuzzy Systems for Brain Computer Interfaces.
  7. Publicly available software and toolboxes.
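
As a concrete example of the fuzzy entropies mentioned in item 4, here is a minimal sketch (ours; one common FuzzyEn variant, not necessarily the one taught in the tutorial) for a single-channel signal such as EEG: embedded segments are compared with a fuzzy exponential similarity rather than a hard threshold.

    # Our sketch of fuzzy entropy (FuzzyEn) for a 1-D signal such as a single
    # EEG channel; one common variant, shown for illustration only.
    import numpy as np

    def fuzzy_entropy(x, m=2, r=0.2, n=2):
        """FuzzyEn of signal x with embedding dimension m, tolerance r
        (as a fraction of the signal's standard deviation), fuzzy power n."""
        x = np.asarray(x, dtype=float)
        r = r * x.std()

        def phi(m):
            # Embed the signal into overlapping, baseline-removed segments.
            N = len(x) - m
            segs = np.array([x[i:i + m] for i in range(N)])
            segs -= segs.mean(axis=1, keepdims=True)
            # Chebyshev distances between all pairs of segments.
            d = np.abs(segs[:, None, :] - segs[None, :, :]).max(axis=2)
            # Fuzzy (exponential) similarity instead of a hard threshold.
            sim = np.exp(-(d ** n) / r)
            return (sim.sum() - N) / (N * (N - 1))   # exclude self-matches

        return np.log(phi(m)) - np.log(phi(m + 1))

    rng = np.random.default_rng(0)
    print(fuzzy_entropy(rng.standard_normal(500)))   # higher for irregular signals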

Plan of the session:
The session is planned to last 90 minutes and will consist of three main sessions (chapters).

  1. First Session (45 minutes): Covering points 1-4 (introduction)
  2. Second Session (30 minutes): Covering points 5 – 6.
  3. Third Session (15 minutes): Covering point 7.

Justification:
The rationale behind this tutorial is very timely. Current methodologies for the analysis of brain signals used in neuroscience are very basic and sometimes over-simplistic. The use of machine learning has only timidly started to be adopted in the field. Usually, very simple statistical methods are used to analyse the highly chaotic and noisy signals arising from neuroimaging. Fuzzy systems have brought better performance to many fields and to the modelling of complex systems. In neuroscience, they can furnish investigators with a new set of research methods that will help them maximise the informational power of non-invasive neuroimaging techniques such as EEG and fNIRS. In comparison with other, highly expensive neuroimaging modalities such as fMRI or DTI, non-invasive techniques can be used in more naturalistic scenarios to study the brain at work. Also, they can power neuro-prostheses or help in rehabilitation tasks after brain injury. The issue is that such non-invasive techniques provide a vision of the neural processes that is less precise than their expensive counterparts. The application of fuzzy systems in this domain is helping to get the most out of these non-invasive neuroimaging modalities, and in this tutorial we will introduce attendees to all the major developments to date.

Organiser Bios

Javier Andreu-Perez is Senior Lecturer in the School of Computer Science and Electronic Engineering (CSEE), University of Essex, United Kingdom. He is also Senior Research Fellow in Systems Based on Fuzzy Decision Analysis at the University of Jaen. His research expertise lies in the development of new methods in artificial intelligence and machine learning within the healthcare domain, particularly in the analysis of bio-medical inertial, dynamical and neuroimaging data. He has expertise in the use of Big Data, machine learning models based on deep learning, and methodologies for uncertainty modelling of highly noisy non-stationary signals. Javier has published more than 50 articles, including several in prestigious IEEE Transactions and other top Q1 journals in Artificial Intelligence and Neuroscience. In total, Javier’s work in Artificial Intelligence and Biomedical Engineering has attracted more than 1800 citations (h-index: 13). Javier has participated in awarded projects from the Innovate UK Council, the NIHR Biomedical Research Centre, the Wellcome Trust Centre for Global Health Research, and private corporations. He is also Associate/Area Editor of several top journals in the fields of theories of learning, computational intelligence and emergent technologies.

Chin-Teng Lin received the B.S. degree from National Chiao-Tung University (NCTU), Taiwan in 1986, and the Master and Ph.D. degrees in electrical engineering from Purdue University, USA in 1989 and 1992, respectively. He is currently Distinguished Professor in the Faculty of Engineering and Information Technology, and Co-Director of the Center for Artificial Intelligence, University of Technology Sydney, Australia. He is also invited as Honorary Chair Professor of Electrical and Computer Engineering, NCTU, International Faculty of the University of California at San Diego (UCSD), and holds an Honorary Professorship at the University of Nottingham. Dr. Lin was elevated to IEEE Fellow for his contributions to biologically inspired information systems in 2005, and was elevated to International Fuzzy Systems Association (IFSA) Fellow in 2012. Dr. Lin received the IEEE Fuzzy Systems Pioneer Award in 2017. He served as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems from 2011 to 2016. He also served on the Board of Governors of the IEEE Circuits and Systems (CAS) Society in 2005-2008, the IEEE Systems, Man, and Cybernetics (SMC) Society in 2003-2005, and the IEEE Computational Intelligence Society in 2008-2010, and was Chair of the IEEE Taipei Section in 2009-2010. Dr. Lin was a Distinguished Lecturer of the IEEE CAS Society from 2003 to 2005 and of the CIS Society from 2015 to 2017. He served as Chair of the IEEE CIS Distinguished Lecturer Program Committee in 2018-2019. He served as Deputy Editor-in-Chief of the IEEE Transactions on Circuits and Systems-II in 2006-2008. Dr. Lin was the Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics in 2005 and General Chair of the 2011 IEEE International Conference on Fuzzy Systems. Dr. Lin is the co-author of Neural Fuzzy Systems (Prentice-Hall) and the author of Neural Fuzzy Control Systems with Structure and Parameter Learning (World Scientific). He has published more than 300 journal papers (total citations: 20,163, h-index: 65, i10-index: 254) in the areas of neural networks, fuzzy systems, brain-computer interfaces, multimedia information processing, and cognitive neuro-engineering, including about 120 IEEE journal papers.

How big is too big? Clustering in (static) BIG DATA with the Fantastic 4

Organiser: James Bezdek (<jcbezdek@gmail.com>).

For this talk, “big” refers to the number of samples (N) and/or the number of dimensions (P) in static sets of feature vector data, or the size of (N x N) similarity or distance matrices for relational clustering. The objectives of clustering in static sets of big numerical data are acceleration for loadable data and approximation for non-loadable data. The Fantastic Four are four basic (aka “naïve”) classical clustering methods:

Gaussian Mixture Decomposition (GMD, 1898)
Hard c-means (often called “k-means,” HCM, 1956)
Fuzzy c-means (reduces to hard k-means in the limit, FCM, 1973)
SAHN Clustering (principally single linkage (SL, 1909))

This talk describes approximation of literal clusters in non-loadable static data. The method is sampling followed by very fast (usually 1-2% of the overall processing time) non-iterative extension to the remainder of the data with the nearest prototype rule. Three methods of sampling are covered: random, progressive, and MaxiMin. The first three models apply to feature vector data and find partitions by approximately optimizing objective function models with alternating optimization (known as expectation-maximization (EM) for GMD). Numerical examples using various synthetic and real data sets (big but loadable) compare this approach to incremental methods (spH/FCM and olH/FCM) that process data chunks sequentially.
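
A minimal sketch of this scheme (ours, with assumed parameter choices for the sample size and number of clusters) is shown below: MaxiMin sampling, a naive HCM run on the loadable sample, then non-iterative nearest-prototype extension to all remaining points.

    # Our illustration (with assumed parameter choices) of the scheme above:
    # MaxiMin sampling, naive hard c-means (HCM) on the loadable sample, then
    # non-iterative extension to all points with the nearest prototype rule.
    import numpy as np

    def maximin_sample(X, n_samples, seed=0):
        """Greedy MaxiMin: each new sample maximises its minimum distance
        to the samples chosen so far."""
        rng = np.random.default_rng(seed)
        idx = [int(rng.integers(len(X)))]
        d = np.linalg.norm(X - X[idx[0]], axis=1)
        for _ in range(n_samples - 1):
            idx.append(int(d.argmax()))
            d = np.minimum(d, np.linalg.norm(X - X[idx[-1]], axis=1))
        return np.array(idx)

    def nearest_prototype(X, V):
        """Label every point by its closest prototype (non-iterative)."""
        return np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2).argmin(axis=1)

    def hcm(X, c, iters=50, seed=0):
        """Naive HCM (k-means); assumes no cluster empties out."""
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), c, replace=False)]
        for _ in range(iters):
            labels = nearest_prototype(X, V)
            V = np.array([X[labels == k].mean(axis=0) for k in range(c)])
        return V

    X = np.random.default_rng(1).normal(size=(100_000, 2))   # stand-in "big" data
    sample = X[maximin_sample(X, 500)]                       # loadable sample
    V = hcm(sample, c=3)                                     # cluster the sample
    labels = nearest_prototype(X, V)                         # fast extension step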

The SAHN models are deterministic and operate in a very different way. Clustering in big relational data by sampling and non-iterative extension begins with visual assessment of clustering tendency (VAT/iVAT). Extension of iVAT to scalable iVAT (siVAT) for arbitrarily large square data is done with MaxiMin sampling, and affords a means for visually estimating the number of clusters in the literal MST of the sample. siVAT then marries quite naturally to single linkage (SL), resulting in two offspring: (exact) scalable SL in a special case, and clusiVAT for the more general case. Time and accuracy comparisons of clusiVAT are made to crisp versions of three HCM models: HCM (k-means), spHCM and olHCM; and to CURE. Experiments on synthetic data sets of Gaussian clusters and various real-world (big, but loadable) data sets are presented.

Organiser Bio

Jim Bezdek, PhD, Applied Mathematics, Cornell, 1973; past president – NAFIPS, IFSA and IEEE CIS; founding editor – Int’l. Jo. Approximate Reasoning, IEEE Transactions on Fuzzy Systems; Life Fellow – IEEE and IFSA; awards: IEEE 3rd Millennium, IEEE CIS Fuzzy Systems Pioneer, IEEE Frank Rosenblatt TFA, IPMU Kempe de Feret Award. Retired in 2007, coming to a University near you soon (especially if there is good fishing).

New Optimization Techniques for TSK Fuzzy Systems: Classification and Regression

Organiser: Dongrui Wu (<drwu@hust.edu.cn>).

Takagi-Sugeno-Kang (TSK) fuzzy systems have achieved great success in numerous applications, including both classification and regression problems. Many optimization approaches have been proposed for them.

There are generally three strategies for fine-tuning the TSK fuzzy system parameters after initialization: 1) evolutionary algorithms; 2) gradient descent (GD) based algorithms; and 3) GD plus least squares estimation (LSE), represented by the popular adaptive-network-based fuzzy inference system (ANFIS). However, these approaches may face challenges as the size and/or the dimensionality of the data increase. Evolutionary algorithms need to keep a large population of candidate solutions and evaluate the fitness of each, which results in high computational cost and heavy memory requirements for big data. Traditional GD needs to compute the gradients from the entire dataset to iteratively update the model parameters, which may be very slow, or even impossible, when the data size is very large. The memory requirement and computational cost of LSE also increase rapidly with the data size and/or dimensionality. Additionally, our research demonstrated that ANFIS may result in significant overfitting in regression problems.

This tutorial explains the functional equivalence between TSK fuzzy systems and certain neural networks, mixtures of experts, CART and stacking ensembles; hence, optimization techniques for the latter may be used for TSK fuzzy systems. It then introduces some newly proposed techniques for mini-batch gradient descent (MBGD) based optimization of TSK fuzzy systems, for both classification and regression. These techniques were mainly extended from the training of deep neural networks and mixture-of-experts models. The proposed algorithms can be applied to datasets of any size, but are particularly useful for big data applications. Their source code is also available online. We believe this tutorial will further facilitate the application of TSK fuzzy systems.

The session mainly consists of three parts:
1. Functional Equivalence between TSK Fuzzy Systems and Neural Networks, Mixture of Experts, CART and Stacking Ensemble Learning
Fuzzy systems have achieved great success in numerous applications. However, there are still many challenges in designing an optimal fuzzy system, e.g., how to efficiently optimize its parameters, how to balance the trade-off between cooperation and competitions among the rules, how to overcome the curse of dimensionality, how to increase its generalization ability, etc. Literature has shown that by making appropriate connections between fuzzy systems and other machine learning approaches, good practices from other domains may be used to improve the fuzzy systems, and vice versa. This part gives an overview on the functional equivalence between TSK fuzzy systems and four classic machine learning approaches — neural networks, mixture of experts, classification and regression trees (CART), and stacking ensemble regression — for regression problems. We also point out some promising new research directions, inspired by the functional equivalence, that could lead to solutions to the aforementioned problems. To our knowledge, this is so far the most comprehensive overview on the connections between fuzzy systems and other popular machine learning approaches, and hopefully will stimulate more hybridization between different machine learning algorithms.
2. New Optimization Technique for TSK Fuzzy Classifiers: Mini-Batch Gradient Descent, Uniform Regularization, and Batch Normalization
TSK fuzzy systems are flexible and interpretable machine learning models; however, they may not be easily optimized when the data size is large and/or the data dimensionality is high. This part proposes an MBGD based algorithm to efficiently and effectively train TSK fuzzy classifiers. It integrates two novel techniques: 1) uniform regularization (UR), which forces the rules to have similar average contributions to the output, and hence increases the generalization performance of the TSK classifier; and 2) batch normalization (BN), which extends BN from deep neural networks to TSK fuzzy classifiers to expedite convergence and improve generalization performance. Experiments on 12 UCI datasets from various application domains, with varying size and dimensionality, demonstrated that UR and BN are effective individually, and that integrating them can further improve classification performance.
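As a rough illustration of the uniform-regularization idea (a sketch under our own assumptions, not the authors' reference implementation), one could penalize deviations of each rule's average normalized firing level from the uniform value 1/R within a mini-batch:

```python
import numpy as np

def uniform_regularization(f_norm):
    """UR-style penalty on a mini-batch (sketch).

    f_norm: (N, R) normalized rule firing levels. The penalty grows when
    some rules dominate the batch on average, nudging every rule towards
    an average contribution of 1/R.
    """
    R = f_norm.shape[1]
    mean_contrib = f_norm.mean(axis=0)               # average contribution per rule
    return float(np.sum((mean_contrib - 1.0 / R) ** 2))

# The mini-batch training loss would then be, schematically:
# loss = cross_entropy(y_true, y_pred) + lam * uniform_regularization(f_norm)
```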
3. New Optimization Technique for TSK Fuzzy Regression Models: Mini-Batch Gradient Descent, Regularization, AdaBound, and DropRule
TSK fuzzy systems are very useful machine learning models for regression problems. However, to our knowledge, no efficient and effective training algorithm has existed that ensures their generalization performance and also enables them to deal with big data. Inspired by the connections between TSK fuzzy systems and neural networks, we extend three powerful neural network optimization techniques, i.e., MBGD, regularization, and AdaBound, to TSK fuzzy systems, and also propose three novel techniques (DropRule, DropMF, and DropMembership) specifically for training TSK fuzzy systems. Our final algorithm, MBGD with regularization, DropRule and AdaBound (MBGD-RDA), can achieve fast convergence in training TSK fuzzy systems, as well as superior generalization performance in testing. It can be used for training TSK fuzzy systems on datasets of any size; however, it is particularly useful for big datasets, on which no other efficient training algorithms currently exist.
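DropRule transplants the spirit of dropout to the rule level. A minimal sketch of how such masking might look (our illustrative rendering; the keep probability and the renormalization step are assumptions, not the published algorithm verbatim):

```python
import numpy as np

def drop_rule(f, p_keep=0.5, rng=None):
    """DropRule-style masking of rule firing levels (sketch).

    f: (N, R) firing levels. During each training step, every rule is kept
    with probability p_keep; the surviving firing levels are renormalized so
    the TSK output remains a weighted average over the active rules.
    """
    rng = rng or np.random.default_rng()
    mask = rng.random(f.shape[1]) < p_keep           # one keep/drop decision per rule
    f_masked = f * mask                              # silence the dropped rules
    return f_masked / (f_masked.sum(axis=1, keepdims=True) + 1e-12)
```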

This session will be a mainly PowerPoint-based tutorial. We will also show how the proposed algorithms work in Matlab.
Outline of the covered material:
1. Introduction to TSK Fuzzy Systems
2. Functional Equivalence between TSK Fuzzy Systems and Neural Networks, Mixture of
Experts, CART and Stacking Ensemble Regression
3. New Optimization Technique for TSK Fuzzy Classifiers: Mini-Batch Gradient Descent,
Uniform Regularization, and Batch Normalization
4. New Optimization Technique for TSK Fuzzy Regression Models: Mini-Batch Gradient
Descent, Regularization, AdaBound, and DropRule
5. Concluding Remarks

Justification:
Potential audience: Researchers, practitioners, and graduate students in fuzzy systems. About 40 attendees are expected.
Timeliness: The tutorial is based on the following three recent research works.
1. D. Wu, C-T Lin, J. Huang* and Z. Zeng*, “On the Functional Equivalence of TSK Fuzzy Systems to Neural Networks, Mixture of Experts, CART, and Stacking Ensemble Regression,” IEEE Trans. on Fuzzy Systems, 28(10):2570-2580, 2020.
2. Y. Cui, D. Wu* and J. Huang*, “Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization,” IEEE Trans. on Fuzzy Systems, 2020, accepted.
3. D. Wu*, Y. Yuan, J. Huang and Y. Tan*, “Optimize TSK Fuzzy Systems for Regression Problems: Mini-Batch Gradient Descent with Regularization, DropRule and AdaBound
(MBGD-RDA),” IEEE Trans. on Fuzzy Systems, 28(5):1003-1015, 2020.

Organiser Bio

Dongrui Wu received a B.E in Automatic Control from the University of Science and Technology of China, Hefei, China, in 2003, an M.Eng in Electrical and Computer Engineering from the National University of Singapore in 2005, and a PhD in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 2009. He was a Lead Researcher at GE Global Research, NY, and a Chief Scientist of several startups. He is now a Professor and Deputy Director of the Key Laboratory of the Ministry of Education for Image Processing and
Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.
Prof. Wu’s research interests include affective computing, brain-computer interfaces, computational intelligence, and machine learning. He has more than 160 publications (6,800+ Google Scholar citations; h=40), including a book, “Perceptual Computing” (Wiley-IEEE Press, 2010), and five US/PCT patents. He received the IEEE International Conference on Fuzzy Systems Best Student Paper Award in 2005, the IEEE Computational Intelligence Society (CIS) Outstanding PhD Dissertation Award in 2012, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award in 2014, the North American Fuzzy Information Processing Society (NAFIPS) Early Career Award in 2014, the IEEE Systems, Man and Cybernetics (SMC) Society Early Career Award in 2017, the IEEE SMC Society Best Associate Editor Award in 2018, the USERN Prize in Formal Sciences in 2020, and the IEEE International Conference on Mechatronics and Automation Best Paper Award in 2020. He was a finalist for the IEEE Transactions on Affective Computing Most Influential Paper Award in 2015, the IEEE Brain Initiative Best Paper Award in 2016, the 24th International Conference on Neural Information Processing Best Student Paper Award in 2017, the Hanxiang Early Career Award in 2018, and the USERN Prize in Formal Sciences in 2019. He was a selected participant of the Heidelberg Laureate Forum in 2013, the US National Academies Keck Futures Initiative (NAKFI) in 2015, and the US National Academy of Engineering German-American Frontiers of Engineering (GAFOE) in 2015. His team won First Prize in the China Brain-Computer Interface Competition in 2019.
Prof. Wu is an Associate Editor of the IEEE Transactions on Fuzzy Systems (2011-2018; 2020-), the IEEE Transactions on Human-Machine Systems (since 2014), the IEEE Computational Intelligence Magazine (since 2017), and the IEEE Transactions on Neural Systems and Rehabilitation Engineering (since 2019). He was the lead Guest Editor of the IEEE Computational Intelligence Magazine Special Issue on Computational Intelligence and Affective Computing, and of the IEEE Transactions on Fuzzy Systems Special Issue on Brain Computer Interface. He is a Senior Member of the IEEE, a Board member and Distinguished Speaker of NAFIPS, and a member of the IEEE Systems, Man and Cybernetics Society Brain-Machine Interface Systems Technical Committee and of the IEEE CIS Fuzzy Systems, Emergent Technologies, and Intelligent Systems Applications Technical Committees. He has been Chair/Vice Chair of the IEEE CIS Affective Computing Task Force since 2012.

Recent Developments in Artificial Intelligence and Fuzzy Systems

Organisers: Alexander Gegov (<alexander.gegov@port.ac.uk>), Uzay Kaymak (<u.kaymak@ieee.org>), Joao Sousa (<jmsousa@tecnico.ulisboa.pt>).

Tutorial goal: The tutorial highlights recent and current developments in AI and Fuzzy Systems. It has an educational focus that is complemented by the latest research results in the field. The tutorial introduces basic concepts that are illustrated with a range of examples and case studies from different application areas.

Tutorial plan: The tutorial includes two parts. The first part focuses on general aspects of AI and Fuzzy Systems; the second part focuses on specific aspects of AI and Fuzzy Systems. Both parts include discussion with the audience. An audience of about 50 or more is expected.

Tutorial outline: The first part of the tutorial introduces the recent developments, rationale, current successes, subject areas and wider popularity of AI. It highlights Human Intelligence as a role model for AI and Computational Intelligence as a driving force behind AI. Computational Intelligence techniques such as Fuzzy Systems, Neural Networks and Evolutionary Algorithms are discussed in the context of their ability to imitate different aspects of Human Intelligence in AI. Approaches such as Expert Systems and Machine Learning are also discussed in the context of their ability to work in a complementary way with knowledge and data.
The second part of the tutorial introduces Cybernetics and Complex Systems as research areas that are closely related and complementary to AI. It underlines the different aspects of system complexity and their impact on the performance of AI based models. Current hot topics such as Big Data, Deep Learning and Deep Fuzzy Systems are discussed in the context of feature extraction and selection as well as structural and parametric identification. Other AI related aspects such as current limitations, future developments, main challenges, application areas, case studies and performance evaluation are also discussed.

Tutorial justification: The potential audience for the tutorial includes undergraduate students, postgraduate students, researchers, academics and practitioners. The proposed duration of the tutorial is 1.5 hours. The organisers are planning to set up a website to promote the tutorial. The tutorial topic is timely in the proposed educational and research setting, in view of the current popularity of AI and its rising impact on everyday life. The tutorial proposers are established researchers in the field with many years of experience, including involvement as Technical Committee Members for IEEE Societies and as Associate Editors for the IEEE Transactions on Fuzzy Systems.

Organiser Bios

Alexander Gegov is Reader in Computational Intelligence in the School of Computing, University of Portsmouth, UK. He holds a PhD degree in Cybernetics and a DSc degree in Artificial Intelligence – both from the Bulgarian Academy of Sciences. He has been a recipient of a National Annual Award for Best Young Researcher from the Bulgarian Union of Scientists. He has been Humboldt Guest Researcher at the University of Duisburg in Germany. He has also been EU Visiting Researcher at the University of Wuppertal in Germany and the Delft University of Technology in the Netherlands.
Alexander Gegov’s research interests are in the development of computational intelligence methods and their application for modelling and simulation of complex systems and networks. He has edited 6 books, authored 5 research monographs and over 30 book chapters – most of these published by Springer. He has authored over 50 articles and 100 papers in international journals and conferences – many of these published and organised by IEEE. He has also presented over 20 invited lectures and tutorials – most of these at IEEE Conferences on Fuzzy Systems, Intelligent Systems, Computational Intelligence and Cybernetics.
Alexander Gegov is Associate Editor for ‘IEEE Transactions on Fuzzy Systems’, ‘Fuzzy Sets and Systems’, ‘Intelligent and Fuzzy Systems’, ‘Computational Intelligence Systems’ and ‘Intelligent Systems’. He is also a Reviewer for several IEEE journals and National Research Councils. He is a Member of the IEEE Computational Intelligence Society and of the Soft Computing Technical Committee of the IEEE Society of Systems, Man and Cybernetics. He has also been Guest Editor for a recent Special Issue on Deep Fuzzy Models of the IEEE Transactions on Fuzzy Systems.

Uzay Kaymak is a Full Professor and Chair of Information Systems in Health Care at Eindhoven University of Technology (TU/e). His research focuses on intelligent decision support systems, data and process mining and computational modeling methods. He has worked on the development of computational intelligence methods for decision models in which linguistic information, represented either as declarative linguistic rules derived from experts or obtained through natural language processing, is combined with numerical information extracted from data by computational and machine learning methods. Fuzzy set theory is at the basis of such models. The resulting (adaptive) decision support systems have been used in various fields such as financial decision-making, economic analysis and clinical decision support.

Towards Trusting AI/ML Decision-Making of Autonomous Systems <Cancelled>

Organiser: Stanton Price (<stanton.r.price@usace.army.mil>)

Autonomous systems have a growing presence throughout numerous industries, from self-driving cars to warehouse distribution robots, surgical robots, and defense systems. As these autonomous systems begin to make decisions of increasing complexity, with potentially life-impacting consequences, it becomes critical to trust the decisions being made. Further, in situations where unexpected outcomes arise (good and/or bad), there is a need to explain and understand why the machine performed the action(s) that led up to the unforeseen consequence. One way to get closer to obtaining trust in these systems is through understanding why decisions are made. This tutorial will cover the topic of gaining user trust in autonomous systems through understanding the AI/ML decision-making process, and how this area is uniquely primed for fuzzy set theory. Additionally, this tutorial will take a hybrid approach, mixing interactive learning experiences with a ‘reverse’ problem-owner-led tutorial seeking academic engagement to foster potential collaboration.

Tutorial Outline

• Introduction to autonomous systems (more heavily focused on autonomous ground vehicles) (5-10 minutes)
• Overview of fuzzy sets (5-10 minutes)
• Overview of numerous eXplainable AI methodologies (5-10 minutes)
• Demo and deep dive of the demo (20-30 minutes):
  • Identify task-specific goals (e.g., identify obstacles that could impede autonomous ground vehicle mobility)
  • Identify holes/gaps in current techniques
  • Identify areas where fuzzy set theory could enhance understanding of the decision-making process
• Deep dive into identified holes/gaps and how fuzzy set theory could play a key role in addressing the identified shortcomings (10-15 minutes)
• ‘Reverse’ the tutorial, probing for academic engagement and collaboration (15-30 minutes)

Justification
Given the relevance of autonomous systems, combined with the growing interest in the AI/ML explainability of these systems, this tutorial is expected to draw an appreciable audience.
Further, the audience is expected to span industry, government, and academia. This tutorial will be promoted throughout various U.S. DoD agencies and Research Centers (e.g., Army, Navy, ERDC, etc.) and through academic collaborators (both U.S. and international). As detailed in the Outline above, this tutorial is expected to last approximately 1.5 hours. Dr. Price leads multiple projects that include aiding the development of state-of-the-art autonomous systems as well as developing state-of-the-art XAI methodologies for various applications. He is well versed in these technologies and in fuzzy set theory, providing insights into how fuzzy sets could play a critical role in understanding autonomous systems’ decision-making processes.

Organiser Bio

Dr. Stanton R. Price earned a Bachelor of Science degree in Biomedical Engineering in 2012 and a Ph.D. in Electrical and Computer Engineering from Mississippi State University in 2018, focused on computer vision (CV) and machine learning (ML). He is currently with the U.S. Army Engineer Research and Development Center (ERDC) as a Research Electrical Engineer, where he leads multiple programs and has leadership roles across projects concerning artificial intelligence (AI) and ML. He served as co-chair of a special session on soft computing for computer vision and pattern recognition at the 2019 FUZZ-IEEE conference, and he serves as a technical reviewer for several conferences and journals. His research interests are computational intelligence, AI/ML, CV, fuzzy logic theory, pattern recognition, digital signal processing, and object detection & classification.

Uncertainty modeling in adversarial learning

Organiser: Xi-Zhao Wang (<xizhaowang@ieee.org>).

This tutorial, Uncertainty modeling in adversarial learning, aims to provide participants with fundamental concepts of uncertainty such as fuzziness and ambiguity, theoretical analysis and key techniques for uncertainty modeling, and their applications to improving the adversarial robustness of deep neural networks.

Specifically, the tutorial goal is
(1) To provide students or young scholars (who have just entered this area) with a basic course on machine learning in uncertain environments, focusing on uncertainty modeling and its impact, throughout the learning process, on the performance of learning algorithms;
(2) To give an introductory course for participants who are working on the recently popular topic of adversarial learning, especially those who are working on adversarial learning in specific domains and want to use uncertainty modeling tools to improve the adversarial robustness of their systems;
(3) To offer useful guidelines for PhD students whose projects involve modeling and solving key issues of uncertainty modeling in adversarial learning; and
(4) To discuss with specialists and senior scholars some new trends in the area of learning from fuzzy examples and adversarial learning, in which uncertainty modeling plays a key role in improving adversarial robustness.

The tutorial will contain the following content.
(1) Introduction to fuzziness and uncertainty. Uncertainty is a natural phenomenon in machine learning, which can be embedded in the entire process of data preprocessing, learning and reasoning. For example, the training samples are usually imprecise, incomplete or noisy; the classification boundaries between samples may be fuzzy; and the knowledge used for learning the target concept may be rough. Uncertainty can be used for selecting extended attributes and informative samples in inductive learning and active learning, respectively. If uncertainty can be effectively modeled and handled throughout processing and implementation, machine learning algorithms will be more flexible and more efficient. This part will focus on the definitions of uncertainty and the relationships and differences among different uncertainties.
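For concreteness, here is a small sketch of two classical measures that the later outline refers to: Shannon entropy of a probability vector, and the De Luca-Termini fuzziness of a membership vector. The code is illustrative only; the naming and the omitted normalization constants are our assumptions.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Entropy of a discrete probability vector p (assumed to sum to 1)."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def fuzziness(mu, eps=1e-12):
    """De Luca-Termini fuzziness of membership degrees mu in [0, 1].

    Zero for a crisp set (all mu in {0, 1}); maximal when mu = 0.5 everywhere.
    """
    mu = np.clip(mu, eps, 1.0 - eps)
    return float(-np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)))

print(shannon_entropy(np.array([0.5, 0.5])))  # ~0.693, maximal for two classes
print(fuzziness(np.array([1.0, 0.0, 0.5])))   # only the 0.5 element contributes
```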
(2) Adversarial examples. An adversarial example is an instance with intentional but small feature perturbations that cause a machine learning model to make a false prediction. Usually, we call this process of generating perturbations an adversarial attack, and the ability of a model to resist adversarial attacks its adversarial robustness. Adversarial examples have become a bigger-than-ever problem with the rapid development of deep learning. Nowadays, applications of deep learning in real-world scenarios are becoming wider and deeper; for example, deep learning has been applied to autonomous driving. There are many adversarial cases in real life: for instance, a self-driving car may crash into another car because it ignores a stop sign after someone places a picture over the sign that, to humans, looks like a stop sign with a little dirt, but is designed to be recognised as a parking-prohibition sign by the car’s software. This is undoubtedly very dangerous. This part will concentrate on the challenges posed by adversarial examples and the current strategies for handling them.
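To make "small perturbations that flip a prediction" concrete, here is a minimal FGSM-style sketch against a logistic classifier, where the input gradient is available in closed form. This is our illustration of the general attack idea, not an example from the tutorial materials; the model, weights, and step size are all assumptions.

```python
import numpy as np

def fgsm_logistic(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic classifier.

    The gradient w.r.t. the input x of the cross-entropy of
    sigmoid(w @ x + b) is (p - y) * w, so the attack is a single sign step.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad_x = (p - y) * w                      # d(cross-entropy)/dx in closed form
    return x + eps * np.sign(grad_x)          # adversarial example

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.3, 0.1]), 1.0              # correctly classified (p > 0.5)
x_adv = fgsm_logistic(x, y, w, b, eps=0.4)
# w @ x_adv = -0.7, so the perturbed input crosses the decision boundary
```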
(3) Prediction uncertainty in adversarial robustness. This part shows that the representation, measurement, and handling of uncertainty have a significant impact on adversarial robustness. Recent research shows that, by increasing model prediction uncertainty when training the model, the classification boundary becomes more balanced: the boundary is pushed as far from the data as possible, which significantly increases the difficulty of attacks based on feature perturbation. This part will mainly discuss the relationship between prediction uncertainty and adversarial robustness.
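As a rough rendering of this idea (a sketch under our own assumptions, not the cited method), one can subtract a weighted prediction-entropy term from the classification loss, rewarding less overconfident predictions:

```python
import numpy as np

def entropy_regularized_loss(probs, y_onehot, lam=0.1, eps=1e-12):
    """Cross-entropy minus lam times the mean prediction entropy (sketch).

    probs: (N, C) softmax outputs; y_onehot: (N, C) one-hot labels.
    Minimizing this trades away a little confidence in exchange for higher
    prediction uncertainty, the property the text links to more balanced
    classification boundaries.
    """
    probs = np.clip(probs, eps, 1.0)
    ce = -np.mean(np.sum(y_onehot * np.log(probs), axis=1))
    ent = -np.mean(np.sum(probs * np.log(probs), axis=1))
    return ce - lam * ent
```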
(4) Analyzing the uncertainty of parameters in convolutional layers. Recent research mainly focuses on the uncertainty of data and predictions, measuring fuzziness, entropy or mutual information to improve the adversarial robustness of a model. In this part, however, we will introduce the uncertainty of the parameters in the convolutional layers of a deep neural network under attack, as a route to improving the model's adversarial robustness. Additionally, this part will introduce a Min-Max property of the convolution parameters and theoretically validate it. The parameters of convolutional layers will be interpreted from the perspective of uncertainty.
(5) Concluding remarks. This part will give an overview of learning with uncertainty from adversarial examples, summarizing the results of recent studies on adversarial learning problems.

Outline of the covered material:
The tutorial outline with brief elaboration is listed below.
1. A brief introduction to adversarial learning. This section introduces the existing definitions of adversarial examples, explanations of why adversarial examples exist, and applications of adversarial learning in areas such as classification, object detection, semantic recognition, etc. (13 slides)
2. General challenges of adversarial learning. These include explaining the existence of adversarial examples, constructing the most powerful attacks, constructing certified defenses, obfuscated gradients, and the accuracy-robustness trade-off. (15 slides)
3. Current adversarial attacks and adversarial defenses. This covers white-box attacks, black-box attacks, distillation training, adversarial training and various normalization techniques. (10 slides)
4. Uncertainty. This covers definitions of several specific uncertainties, including entropy, fuzziness, ambiguity, and more; the differences and similarities among them; criteria for uncertainty modeling, representation, measurement, and processing; and the impact of uncertainty processing on adversarial learning: what role it plays, how it affects adversarial learning, and why it works. (25 slides)
5. Uncertainty in adversarial learning. This part includes (1) the relationship between prediction uncertainty and adversarial robustness; and (2) the uncertainty of parameters in the convolutional layers of a deep neural network under attack. (20 slides)
6. Concluding remarks. This part comments on how to model and handle the uncertainty of a model to improve the adversarial robustness of a deep neural network with convolutional layers. (3 slides)

Justification: The potential audience includes (1) students or young scholars who want a course on adversarial learning with uncertainty modeling; (2) participants who are working on adversarial examples and want to use uncertainty modeling tools to improve the performance of their learning systems; (3) PhD students whose projects are related to learning with uncertainty; and (4) specialists or senior scholars who want a brief view of new trends in uncertainty modeling in adversarial learning.
The total number of participants is expected to be around 100. The tutorial is delivered in 2 hours with a 20-minute break. Specifically, based on the content outline above, the time is allocated as follows: a brief introduction to adversarial learning (15 minutes), general challenges of adversarial learning (10 minutes), current adversarial attack and defense strategies (10 minutes), uncertainty (25 minutes), uncertainty in adversarial learning (35 minutes), and concluding remarks (5 minutes).

Organiser Bio

Xi-Zhao Wang, PhD, is a Professor at Shenzhen University, a CAAI Fellow, an IEEE Fellow, a previous Board of Governors member of the IEEE SMC Society, the chair of the IEEE SMC Technical Committee on Computational Intelligence, and the Chief Editor of the Springer journal Machine Learning and Cybernetics. Prof. Wang’s major research interests include uncertainty modeling and machine learning. He has edited 10+ special issues; published 3 monographs, 2 textbooks, and 200+ peer-reviewed research papers; and supervised more than 150 MPhil and PhD students. Prof. Wang was a distinguished lecturer of the IEEE SMCS.

Using intervals to capture and handle uncertainty

Organisers: Christian Wagner (<christian.wagner@nottingham.ac.uk>), Vladik Kreinovich (<vladik@utep.edu>), Shaily Kabir (<shaily.kabir@nottingham.ac.uk>), Zack Ellerby (<zack.ellerby@nottingham.ac.uk>).

Uncertainty is pervasive across data and data sources, from sensors in engineering applications to human perceptions (consumer preferences, patients’ assessments of pain, voters’ views in anticipation of elections) and the knowledge of experts in areas as diverse as marketing and cyber security. Appropriate handling of such uncertainties depends upon three main stages: capture, modelling, and appropriate AI-based handling of the uncertain information.
This tutorial will explain the nature of uncertainty at the level of individual data sources (such as people, sensors, etc.), highlighting the difference between between- (or inter-) and within- (or intra-) source uncertainty. Focussing on the uncertainty arising from human-sourced data, the organisers will take participants through an end-to-end exercise in capturing and analysing interval-valued data, exploring the research question ‘What are the attitudes to AI?’ live with participants. Participants will specifically learn about intervals as a model to capture and model uncertainty, including the complexity and potential of the associated techniques, from the basics (e.g. interval arithmetic) to current frontier topics underpinning the role of interval-valued data in AI (e.g. interval-valued regression). The tutorial is designed to be suitable for both students and academics/professionals from academia and industry, without requiring prior expertise in the area.
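As a taste of those basics, here is a minimal interval-arithmetic sketch (illustrative only; the class name and the choice of operations are our assumptions). The defining property is that the result interval encloses every value the operation could produce for inputs drawn from the operand intervals:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # The sum of any a in self and b in other lies within these endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Endpoint products bound every possible product of interval members.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# e.g. an uncertain rating [3, 5] scaled by an uncertain weight [0.8, 1.2]
print(Interval(3, 5) * Interval(0.8, 1.2))  # roughly Interval(lo=2.4, hi=6.0)
```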

This tutorial is designed to give researchers a practical introduction to the use of intervals for handling uncertainty in data. The tutorial will discuss relevant types and sources of uncertainty before proceeding to review and demonstrate practical approaches and tools that enable the capture, modelling and analysis of interval-valued data. This session will provide participants with an end-to-end overview and in-depth starting point for leveraging intervals within their own research.

Timing: The overall time of the tutorial will be three hours, with approximately 40 minutes per section and a 20-minute break.

Prerequisites: There are no prerequisites for this tutorial, although familiarity with fuzzy sets will be an advantage.

Organiser Bios

Christian Wagner (SM’13) received his Ph.D. in Computer Science in 2009 from the University of Essex, UK. He is a Professor of Computer Science at the University of Nottingham and founding director of the Lab for Uncertainty in Data and Decision Making (LUCID). He has published over 100 peer-reviewed articles, focusing on the modelling and handling of uncertain data arising from heterogeneous data sources (e.g. stakeholders), with a particular emphasis on designing interpretable AI-based decision support systems. In 2017, he was recognised as a RISE (Recognising Inspirational Scientists and Engineers) Connector by EPSRC. His work ranges from decision support in cyber security and environmental management to personalisation and control in manufacturing. He has led or co-led ten research projects with partners from industry and government, with an overall value of around £10m, and co-developed multiple open source software frameworks, making cutting-edge research accessible to research communities beyond computer science. Dr Wagner is an Associate Editor of the IEEE Transactions on Fuzzy Systems, Chair of the IEEE CIS Technical Committee on Fuzzy Systems and of its Task Force on Cyber Security, and an elected member-at-large of the IEEE Computational Intelligence Society (CIS) Administrative Committee for 2018-2020. He is a General Co-Chair of Fuzz-IEEE 2021 in Luxembourg and was Special Sessions Chair at Fuzz-IEEE 2019 in New Orleans.

Vladik Kreinovich received his MS in Mathematics and Computer Science from St. Petersburg University, Russia, in 1974, and PhD from the Institute of Mathematics, Soviet Academy of Sciences, Novosibirsk, in 1979. From 1975 to 1980, he worked with the Soviet Academy of Sciences; during this time, he worked with the Special Astrophysical Observatory (focusing on the representation and processing of uncertainty in radioastronomy). For most of the 1980s, he worked on error estimation and intelligent information processing for the National Institute for Electrical Measuring Instruments, Russia. In 1989, he was a visiting scholar at Stanford University. Since 1990, he has worked in the Department of Computer Science at the University of Texas at El Paso. In addition, he has served as an invited professor in Paris (University of Paris VI), France; Hannover, Germany; Hong Kong; St. Petersburg and Kazan, Russia; and Brazil. His main interests are the representation and processing of uncertainty, especially interval computations and intelligent control. He has published eight books, 24 edited books, and more than 1,500 papers. Vladik is a member of the editorial board of the international journal “Reliable Computing” (formerly “Interval Computations”) and several other journals. In addition, he is the co-maintainer of the international Web site on interval computations http://www.cs.utep.edu/interval-comp. Vladik is Vice President of the International Fuzzy Systems Association (IFSA), Vice President of the European Society for Fuzzy Logic and Technology (EUSFLAT), Fellow of International Fuzzy Systems Association (IFSA), Fellow of Mexican Society for Artificial Intelligence (SMIA), Fellow of the Russian Association for Fuzzy Systems and Soft Computing; he served as Vice President for Publications of IEEE Systems, Man, and Cybernetics Society 2015-18, and as President of the North American Fuzzy Information Processing Society 2012-14; is a foreign member of the Russian Academy of Metrological Sciences; was the recipient of the 2003 El Paso Energy Foundation Faculty Achievement Award for Research awarded by the University of Texas at El Paso; and was a co-recipient of the 2005 Star Award from the University of Texas System.

Shaily Kabir received her Ph.D. in Computer Science from the University of Nottingham, UK, in 2020. Prior to that, she obtained B.Sc. (Hons.) and M.S. degrees in Computer Science and Engineering from the University of Dhaka, Bangladesh. Later, she received an M.S. in Computer Science (research) from Concordia University, Canada. She is currently an Associate Professor in Computer Science and Engineering at the University of Dhaka, Bangladesh. She has published in peer-reviewed international journals since 2006 and has co-authored a good number of articles. She is also a Commonwealth Scholar and won an IEEE Computational Intelligence Society (CIS) Graduate Research Grant in 2018. Her main research interest is the robust comparison of uncertain data arising from multiple data sources and their efficient fusion. Her other research interests include qualitative and quantitative data analysis, particularly with uncertain data, clustering and classification, data mining, machine learning, and image processing.

Zack Ellerby received a PhD in Psychology and Cognitive Neuroscience from the University of Nottingham in 2018. He is a Research Fellow at the University of Nottingham School of Computer Science. His research interests include human judgements under uncertainty and developing methods for the efficient capture and handling of uncertain data, in particular through the elicitation of interval-valued responses and the adaptation of methods of inferential statistics, with applications in cyber security and other contexts. He is a member of the IEEE.