Heinz Muehlenbein
Fraunhofer Institute AiS
Schloss Birlinghoven
53754 Sankt Augustin
Germany
  • heinz.muehlenbein@online.de

  • Scientific Curriculum
  • 1969 Start at the computing center of GMD: telecommunication, time-sharing operating systems
  • 1971 Virtual memory management, SIEMENS BS200, performance measurements
  • 1975 Ph.D. in Numerical Mathematics, University of Bonn
  • 1975 Higher-level protocols for computer networks, simulation system TOCS
  • 1980 Parallel processing, MUPPET programming environment of SUPRENUM
  • 1987 Natural Computation (Evolutionary Computation, Neural Networks)
  • 1995 Hand-Eye Robot JANUS
  • 1999 Mathematical foundation, Estimation of Distribution Algorithms 

  • Publications

  • Evolutionary Computation
  • Neural Networks and Robotics
  • Popular science overview (1995)


  • Hand-Eye Robot Janus


  • Hand-eye robot lab

  • Hand-eye robot demos


  • Evolutionary Computation
    Evolutionary computation is based on models of evolution. Unfortunately there exists no GREAT UNIFYING theory of evolution, but there are theories dealing with particular aspects of it. I have used some of these theories to create different evolutionary algorithms. A survey of my research on discrete parameter optimization is available (research up to 2000). I consider the following papers representative of my work. All of the algorithms are worth investigating.

    I. Models derived from Darwin's book

    II. Models derived from the science of breeding

    Breeders have developed an evolution theory based on statistics. Here we do not have natural selection; the selection is done by breeders. This theory has had a major impact on improving the yield of crops and livestock. For function optimization it has proved to be a promising model.
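
    The central quantity of this theory is the response to selection. As a reminder, in the standard textbook notation of quantitative genetics (which is not necessarily the notation of my papers), the response R, the change of the population mean from one generation to the next, is predicted from the selection differential S and the realized heritability b:

        R = b * S,    with  S = I * sigma_p,    hence    R = I * b * sigma_p,

    where I is the selection intensity and sigma_p the phenotypic standard deviation of the trait in the population.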

    III. Mathematical models (Estimation of Distribution)

    Classical population genetics is formulated in terms of gene distributions. I have used this abstraction, together with new techniques developed in statistics, to define a new family of algorithms called Estimation of Distribution Algorithms. Instead of mutating and recombining individual strings, a search distribution is estimated and sampled from.
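
    As an illustration of this idea, here is a minimal sketch of the simplest member of this family, a UMDA-style algorithm on binary strings with a univariate product distribution. The function and parameter names are illustrative only and are not taken from any of the papers below.

        import random

        def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50):
            # Start the search distribution at the uniform distribution over bit strings.
            p = [0.5] * n_bits
            best = None
            for _ in range(generations):
                # Sample a population from the current search distribution.
                pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                       for _ in range(pop_size)]
                pop.sort(key=fitness, reverse=True)
                if best is None or fitness(pop[0]) > fitness(best):
                    best = pop[0]
                # Truncation selection, then estimate the new distribution as the
                # marginal bit frequencies of the selected individuals -- no mutation
                # or recombination of individual strings.
                selected = pop[:n_select]
                p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
            return best

        # Example: maximize OneMax (the number of ones in the string).
        print(umda(sum, n_bits=20))

    More elaborate members of the family replace the univariate product distribution by factorized or graphical-model distributions, but the estimate-and-sample loop stays the same.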

    IV. Outlook: towards a unified evolution theory

    Three years ago I started working on a more unified evolution theory. Its task is not optimization but understanding evolution, be it of natural or artificial organisms.

    Major Publications

    Large combinatorial problems
    Continuous function optimization
    Some theoretical analysis
    Genetic programming (network representation)
    Technical Reports and Publications
    Applications of the BGA
    Neural networks
    JANUS hand-eye robot
  • The Janus Architecture for a Robot Brain
  • The JANUS Architecture for an Artificial Brain: Medical Perspectives

  • Abstracts


  • The JANUS Architecture for a Robot Brain
  •   F.J. Smieja and H. Muehlenbein, December 13, 1990

    REFLEX internal paper

    In the evolution of the mammalian brain the two-sided nature of the neural motor control and sensory input has been preserved in the architecture of the cerebral matter. We believe this fundamental two-sided nature of the mammalian brain to hold a very important clue to the problem-solving capability of the higher mammals. The idea of a two-hemisphere architecture on the macro-level and modules of neural networks on the micro-level, with conflicts between the two, as observed in Nature, provides a basis for the next generation of artificial problem solvers, exemplified by our Janus architecture. This architecture consists of two halves that can independently process the same data and generate decisions about a change of the robot's state. The halves are connected by simple processing channels through which they can exchange information at various levels. The brain consists of neural networks at its lowest level, and so is adaptive, but the two sides process and learn about the environment in subtly different ways. The Janus architecture is described here in a top-down modular fashion, leading to the presentation of the model for a simple prototype system, Janus I, which is to be developed through simulation. Janus I is a first step into "reflective" architectures, whereby the system internally reflects about the capabilities and reliability of its modules.
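
    Purely as an illustration of the macro-level idea (two halves that independently process the same data and may produce conflicting decisions), a toy sketch follows; all names and decision rules are hypothetical and are not taken from the Janus implementation.

        class Hemisphere:
            """One half of the two-sided architecture: it processes the same sensory
            input as its partner, but with its own (adaptive) decision module."""
            def __init__(self, name, decide):
                self.name = name
                self.decide = decide          # stand-in for a neural network at the lowest level

            def propose(self, observation):
                return {"hemisphere": self.name, "action": self.decide(observation)}

        def janus_step(left, right, observation):
            """Both hemispheres see the same data; a simple channel between them
            exposes conflicts, which a higher level must resolve."""
            a, b = left.propose(observation), right.propose(observation)
            conflict = a["action"] != b["action"]
            return a, b, conflict

        # Toy usage with two slightly different decision rules.
        left = Hemisphere("left", lambda obs: "reach" if obs > 0.5 else "wait")
        right = Hemisphere("right", lambda obs: "reach" if obs > 0.7 else "wait")
        print(janus_step(left, right, 0.6))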

    The JANUS Architecture for an Artificial Brain: Medical Perspectives

     
    F.J. Smieja and H. Muehlenbein, March 22, 1991

    (GMD report 635)

    In the evolution of the mammalian brain the two-sided nature of the neural motor control and sensory input has been preserved in the architecture of the cerebral matter. We believe this fundamental two-sided nature of the mammalian brain to hold a very important clue to the problem-solving capability and method of the higher mammals. The JANUS idea for the design of a robot's brain is to have a two-hemisphere architecture on the macro-level and self-assessing modules of neural networks on the micro-level. At the highest architectural level there is the possibility of conflicts between the opinions of the two hemispheres, which can independently process the same data and generate motor decisions. The halves are connected by a simplified analogue of the Corpus Callosum, through which they can exchange information at various architectural levels. In this paper the JANUS architecture is motivated and described, and we explain how we expect to use this medically-inspired model to contribute to our understanding of neurophysiological rehabilitation, through construction of a visuo-manual prototype, which has two eyes and two arms.

    Reflective Modular Neural Network Systems

     
    F.J. Smieja and H. Muehlenbein, March 13, 1992

    (GMD report 633)

    Many of the current artificial neural network systems have serious limitations concerning accessibility, flexibility, scaling and reliability. In order to go some way towards removing these we suggest a reflective neural network architecture. In such an architecture, the modular structure is the most important element. The building-block elements are called MINOS modules. They perform self-observation and inform on the current level of development, or scope of expertise, within the module. A Pandemonium system integrates such submodules so that they work together to handle mapping tasks. Network complexity limitations are attacked in this way with the Pandemonium problem decomposition paradigm, and both static and dynamic unreliability of the whole Pandemonium system are effectively eliminated through the generation and interpretation of confidence and ambiguity measures at every moment during the development of the system. Two problem domains are used to test and demonstrate various aspects of our architecture. Reliability and quality measures are defined for systems that only answer part of the time. Our system achieves better quality values than single networks of larger size for a handwritten digit problem. When both second and third best answers are accepted, our system is left with only 5 % error on the test set, 2.1 % better than the best single net. It is also shown how the system can elegantly learn to handle garbage patterns. With the parity problem it is demonstrated how the complexity of a problem may be decomposed automatically by the system, by solving it with networks smaller than a single net would need to be. Even when the system does not find a solution to the parity problem, because networks of too small a size are used, the reliability remains around 99--100 %. Our Pandemonium architecture gives more power and flexibility to the higher levels of a large hybrid system than a single-net system can, offering useful information for higher-level feedback loops, through which reliability of answers may be intelligently traded for less reliable but important "intuitional" answers. In providing weighted alternatives and possible generalizations, this architecture gives the best possible service to the larger system of which it will form part.
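
    The report defines its own reliability and quality measures; purely to illustrate what a system that only answers part of the time looks like, here is a toy sketch with a single confidence threshold (the names and data are invented).

        def evaluate_with_rejection(predictions, threshold):
            """Each prediction carries a confidence; low-confidence answers are
            rejected rather than risked. predictions is a list of
            (correct: bool, confidence: float) pairs."""
            answered = [(ok, c) for ok, c in predictions if c >= threshold]
            coverage = len(answered) / len(predictions)
            if not answered:
                return coverage, None
            error = sum(1 for ok, _ in answered if not ok) / len(answered)
            return coverage, error

        # Toy data: mostly confident and correct, a few wrong answers.
        data = [(True, 0.9)] * 85 + [(False, 0.3)] * 10 + [(False, 0.95)] * 5
        print(evaluate_with_rejection(data, threshold=0.5))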

    Multiple Network Systems (Minos) Modules: Task Division and Module Discrimination

      F.J. Smieja, April 1991

    (GMD report 638)

    Proceedings of the 8th AISB conference on Artificial Intelligence, Leeds

    It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent systems. These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models, the second step is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second point, namely: processing reliability and hiding of wetware details. Presented is an architecture for a type of neural expert module, named an Authority. An Authority consists of a number of Minos modules. Each of the Minos modules in an Authority has the same processing capabilities, but varies with respect to its particular specialization to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest confidence is believed, and it is the answer and confidence quotient that are transmitted to other levels in a system hierarchy.
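
    A minimal sketch of the selection rule stated above (believe the expert with the highest confidence and pass its answer and confidence upwards); the toy modules below are stand-ins, not the original Minos networks.

        def authority_answer(minos_modules, x):
            """Ask every module for (answer, confidence) and believe the expert
            with the highest confidence; both are transmitted upwards."""
            answer, confidence = max((m(x) for m in minos_modules),
                                     key=lambda pair: pair[1])
            return answer, confidence

        # Toy modules, each specialized to part of the input range.
        modules = [
            lambda x: ("small", 1.0 - x),   # confident on small inputs
            lambda x: ("large", x),         # confident on large inputs
        ]
        print(authority_answer(modules, 0.8))   # -> ('large', 0.8)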

     The Pandemonium System of Reflective Agents

      Frank Smieja, March 1994

    GMD Technical Report No. 794

    REFLEX Report No. 1994/2

    IEEE Transactions on Neural Networks (1996) 7(1):97-106

    The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feed-forward neural networks which adapt using the back-propagation algorithm. We demonstrate the performance of Pandemonium on various categories of problems, including learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. It is shown how strongly the advantages gained from using a modularization technique depend on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two test categories is contrasted with its limited advantages for the latter two. In the first case the system converges more quickly with modularization and leads to simpler solutions. In the second case the problem is not significantly simplified through flat decomposition of the input space, although convergence is still quicker.



    Open Worlds, Reflective Statistics and Stochastic Modelling

      H. Muehlenbein, July 26, 1994

    REFLEX Report No. 1994/14

    The real world is open and ambiguous. The problem of its openness has been neglected in science for a long time, especially in artificial intelligence. Most researchers in artificial intelligence still deal with closed worlds. A recent example is the CYC project of Lenat, which started in 1984. Lenat believed that after entering about 10 million facts into CYC, "CYC will grow by assimilating textbooks, literature, newspapers etc." Now, in 1994, it has turned out that CYC has hardly enough knowledge for a small artificial domain in VLSI design. Openness is a deep problem; it has to be taken seriously. The real world is not completely knowable. In fact, the domain of knowledge is very small compared to the huge unknown domain. Any system operating in the real world has to act with incomplete knowledge. This general observation has far-reaching implications for probability theory and statistical inference. The theoretical discussion in these areas culminated in Popper's famous sentence: "All knowledge is assumption knowledge". Every piece of knowledge formulated as a hypothesis is preliminary, subject to rejection if new data contradicts the hypothesis. In more technical terms this means for statistical inference: probabilistic hypotheses do not have a hypothesis probability. It is possible to rate a number of hypotheses according to how well they explain the data, but it is not possible to compute a number which can be interpreted as the probability of a given hypothesis.

    Algorithms, data and hypotheses - Learning in open worlds -

      H. Muehlenbein

    REFLEX Report No. 1995/4

    This paper contains an informal discussion about how to synthesize reasonable hypotheses from data. This is a fundamental problem for any system acting in the real world. The problem consists of three interconnected subproblems: fitting the past data to a hypothesis (model), selecting promising new data in order to increase the validity of the hypothesis, and selecting a hypothesis in a class of hypotheses (models). We argue that molecular electronics may be important for the development of such systems. First, it provides the computing power needed for such systems. Second, it can help in defining a new computational model urgently needed for the design of artificial systems synthesizing hypotheses about processes of the real world.
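
    Purely as a toy illustration of how the three subproblems interlock, the sketch below uses polynomials as a stand-in hypothesis class; none of the names or choices come from the paper.

        import numpy as np

        def fit(degree, xs, ys):
            # Subproblem 1: fit the past data to a hypothesis (here: a polynomial).
            return np.polyfit(xs, ys, degree)

        def select_hypothesis(xs, ys, degrees=(1, 2, 3)):
            # Subproblem 3: rate competing hypotheses by how well they explain
            # held-out data and keep the best one.
            scores = {}
            for d in degrees:
                coeffs = fit(d, xs[::2], ys[::2])                 # fit on half the data
                resid = ys[1::2] - np.polyval(coeffs, xs[1::2])   # test on the other half
                scores[d] = float(np.mean(resid ** 2))
            return min(scores, key=scores.get), scores

        def propose_new_x(candidates, coeffs_a, coeffs_b):
            # Subproblem 2: select promising new data -- here, the candidate input
            # on which two competing hypotheses disagree the most.
            return max(candidates,
                       key=lambda x: abs(np.polyval(coeffs_a, x) - np.polyval(coeffs_b, x)))

        # Toy usage on noisy quadratic data.
        rng = np.random.default_rng(0)
        xs = np.linspace(-1.0, 1.0, 20)
        ys = 2.0 * xs ** 2 + 0.1 * rng.standard_normal(20)
        best_degree, scores = select_hypothesis(xs, ys)
        print(best_degree, scores)
        print(propose_new_x(np.linspace(-2.0, 2.0, 9), fit(1, xs, ys), fit(2, xs, ys)))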