Seminars 2014

As in the past four years, the CCCS will again hold monthly research seminars during the semester in 2014. The seminars usually take place on the first Tuesday of March, April and May (spring semester) and of October, November and December (autumn semester). Additional extra seminars can be arranged.

We are currently revising the entire seminar concept. Provisionally, a seminar room has been reserved in the "Schnitz" at Rosshofgasse (Petersgraben 51); as before, the seminar talks take place over lunchtime (ca. 12:10-13:15).


21 February 2014, extra seminar, Biozentrum, Room 102, 11:15-12:00
Prof. Jean-Philippe Thiran, Signal Processing Laboratory, EPFL & CHUV Lausanne.

Title: Connectometry - towards global quantitative brain connectivity analysis by diffusion MR imaging.

In this talk I will discuss diffusion MR imaging as a tool for the in-vivo macroscopic study of brain connectivity. I will first explain how MR imaging can be used to measure diffusion in the human brain and how the diffusion phenomenon is related to white matter architecture. Several diffusion MR protocols will be presented, including Diffusion Tensor Imaging (DTI) and Diffusion Spectrum Imaging (DSI). Then I will introduce advanced image analysis techniques developed to infer information on brain connectivity from diffusion MR images, and show how these can be applied to study global brain connectivity in both normal and pathological cases. Finally, I will discuss some of our very recent work on connectometry, i.e. on inferring the microstructure of the white matter from diffusion MR data.
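As background to the DTI protocol mentioned in the abstract (this equation is standard textbook material, not taken from the talk): diffusion tensor imaging models the measured signal along a diffusion-encoding gradient with the Stejskal-Tanner relation

```latex
S(\mathbf{g}, b) = S_0 \, \exp\!\left( -b \, \mathbf{g}^{\mathsf{T}} \mathbf{D} \, \mathbf{g} \right)
```

where $S_0$ is the signal without diffusion weighting, $b$ the diffusion-weighting factor, $\mathbf{g}$ the unit gradient direction, and $\mathbf{D}$ the symmetric $3 \times 3$ diffusion tensor whose eigenstructure encodes the local white-matter fibre orientation.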



4 March 2014: Garth N. Wells, University of Cambridge (UK).

Title: Techniques for high-level, high-performance scientific software

The development and use of a domain-specific language coupled with code generation has proved to be very successful for creating high-level, high-performance finite element solvers. The use of a domain-specific language allows problems to be expressed compactly in near-mathematical notation, and facilitates the preservation of mathematical abstractions. The generation of low-level code from expressive, high-level input can offer performance beyond what one could reasonably achieve using conventional programming techniques. Moreover, development time can be dramatically reduced and everyday HPC made accessible to domain experts. This presentation will summarise some recent developments in automated modelling for HPC, and a range of challenging modelling problems from different fields that use the presented tools will be shown.

Link to the FEniCS project (in English)
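The code-generation idea described in the abstract can be caricatured in a few lines. The sketch below is a hypothetical toy, not the FEniCS API: FEniCS compiles UFL forms into optimized low-level C code, whereas this sketch only mirrors the principle of turning expressive, near-mathematical input into executable code (all names here are invented).

```python
import ast

def compile_expression(src, variables):
    """Turn a near-mathematical expression string into a callable.

    Toy compiler for illustration only: parse the high-level input,
    reject unknown symbols, and emit an executable code object.
    """
    tree = ast.parse(src, mode="eval")
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    unknown = names - set(variables)
    if unknown:
        raise NameError(f"unknown symbols in form: {sorted(unknown)}")
    code = compile(tree, filename="<generated>", mode="eval")

    def generated(**values):
        # Evaluate the generated code with no builtins, only the
        # declared variables in scope.
        return eval(code, {"__builtins__": {}}, values)

    return generated

# A bilinear-form-like integrand written compactly, then "compiled":
form = compile_expression("du * dv + u * v", ["du", "dv", "u", "v"])
print(form(du=1.0, dv=2.0, u=3.0, v=4.0))  # 1*2 + 3*4 = 14.0
```

The point of the real pipeline is the same separation of concerns: the domain expert writes the mathematical form; the generated low-level code handles performance.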


1 April 2014: Alessandro De Vita, King's College London (UK).

Title: Materials modelling by 'Learn On The Fly' molecular dynamics methods

Modelling the catastrophic brittle failure of a crystalline silicon component, a rock specimen, or an advanced ceramic interface at the atomistic level requires a non-uniform-precision multi-scale treatment. Failure occurs as a response to the stress concentration determined by the large-scale geometry of the crack tip region, which alters the free energy landscape of bond-breaking processes at the advancing crack tip. In turn, the (catastrophic, or stress-corrosive) breaking of chemical bonds feeds back into the larger scale by driving the dynamical evolution of the stress tensor field. In this talk, I will describe some examples where this “chemo-mechanical” coupling produces remarkable fracture phenomena such as propagation instabilities induced by isolated impurity atoms and chemically controlled crack propagation.
Quantum-accurate atomic forces on tip atoms are needed in these studies, while a quantum treatment of the whole system cannot be afforded. This is a standard state of affairs in atomistic materials modelling: encoding the relevant information into high-quality classical force fields is necessary, but no suitably general force-field form is available, nor is a fitting database a priori guaranteed to contain the information necessary to describe all the phenomena encountered along the dynamics. I will argue that this situation forces the use of MD techniques capable of incorporating accurate QM information generated at run time during the simulations. This effectively creates a novel market for dynamical databases and specially tuned machine-learning force fields which minimize the computing workload by performing QM subroutine calls only when a “chemically novel” configuration is encountered along the system's trajectory. Preliminary results for one such “Learn On the Fly” scheme capable of learning/predicting atomic forces through a Gaussian process will be presented.

Prof. Alessandro De Vita, King's College London
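The decision loop sketched in the abstract can be illustrated in a deliberately minimal form: keep a database of configurations, predict forces with a Gaussian process, and call the expensive QM routine only when the predictive variance flags a "chemically novel" configuration. The sketch below is a hypothetical 1D caricature, not the actual LOTF code (real schemes work on full atomic environments, and all names here are invented).

```python
import math

def rbf(x1, x2, length=0.5):
    """Squared-exponential kernel on a 1D configuration coordinate."""
    return math.exp(-(x1 - x2) ** 2 / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class LearnOnTheFly:
    """Toy illustration: GP force prediction with on-demand QM calls."""

    def __init__(self, qm_call, threshold=0.1, noise=1e-6):
        self.qm_call = qm_call      # expensive "quantum" force routine
        self.threshold = threshold  # predictive-variance trigger
        self.noise = noise
        self.X, self.y = [], []     # the dynamical database

    def force(self, x):
        if self.X:
            k = [rbf(x, xi) for xi in self.X]
            K = [[rbf(a, c) + (self.noise if i == j else 0.0)
                  for j, c in enumerate(self.X)] for i, a in enumerate(self.X)]
            alpha = solve(K, self.y)
            mean = sum(ki * ai for ki, ai in zip(k, alpha))
            v = solve(K, k)
            var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v))
            if var < self.threshold:
                return mean  # GP is confident: skip the QM call
        # "Chemically novel" configuration: call QM, grow the database.
        f = self.qm_call(x)
        self.X.append(x)
        self.y.append(f)
        return f

# Toy "QM" routine standing in for an expensive quantum force call:
qm_calls = []
def expensive_qm_force(x):
    qm_calls.append(x)
    return math.sin(x)

model = LearnOnTheFly(expensive_qm_force, threshold=0.05)
trajectory = [0.0, 0.05, 0.1, 1.0, 1.02]
forces = [model.force(x) for x in trajectory]
print(len(qm_calls), "QM calls for", len(trajectory), "steps")
```

Configurations close to stored ones are served from the cheap GP prediction; only genuinely new regions of configuration space trigger the expensive call, which is the workload saving the abstract describes.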


6 May 2014: Krishna P. Gummadi, MPI for Software Systems, Saarbrücken (Germany).

Title: Extracting relevant and trustworthy information from microblogs.

Microblogging sites like Twitter have emerged as a popular platform for exchanging real-time information on the Web. Twitter is used by hundreds of millions of users, ranging from popular news organizations and celebrities to domain experts in fields like computer science and astrophysics, as well as spammers. As a result, the quality of information posted on Twitter is highly variable, and finding the users who are authoritative sources of relevant and trustworthy information on specific topics (i.e., topical experts) is a key challenge. I will attempt to address this challenge in this two-part talk.


7 October 2014: Vice Rectorate for Research

Diss:Kurs - Doctoral Day at the University of Basel, Kollegienhaus

Various short presentations by doctoral students from all disciplines. Further details will follow in the summer.


4 November 2014: S. Miyazaki, Critical Media Lab, FHNW Basel.

Title: Experimental data aesthetics (audification/sonification) - an invitation to dialogue

An inherent design problem in the analysis of high-dimensional data is the difficulty of representing the datasets in such a way that the information they contain can be recognized and explored by researchers. Since the advent of computers, numerous algorithmic methods have been developed for filtering and extracting recognizable differences in the data. Although these methods have become ever more efficient and screens ever larger, visualization still runs up against its limits. We are therefore interested in extending the sensory forms of representation to include the acoustic, and in the question of whether supplementing the researchers' "eye work" and the computer's "computing work" with "ear work" can offer new solutions to this representation problem.

Web: Critical Media Lab, FHNW
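Audification in its simplest sense, playing a data series directly as an audio waveform, can be sketched with the Python standard library alone. This is an illustrative example of the general technique, not code from the talk; the function name and data are invented.

```python
import math
import struct
import wave

def audify(samples, path, rate=8000):
    """Rescale an arbitrary numeric series to 16-bit mono audio and write a WAV.

    The data series itself becomes the waveform: each data point is one
    audio sample, so structure in the data becomes audible texture.
    """
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # guard against a constant series
    scaled = [int(32767 * (2 * (s - lo) / span - 1)) for s in samples]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)                 # mono
        w.setsampwidth(2)                 # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", v) for v in scaled))

# Example: a decaying oscillation standing in for a measured data series.
data = [math.sin(0.05 * i) * math.exp(-0.0005 * i) for i in range(8000)]
audify(data, "series.wav")  # one second of audio at 8 kHz
```

Sonification proper goes further, mapping data dimensions onto pitch, loudness or timbre rather than playing raw values, but the "ear work" idea is the same.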


2 December 2014: J. Copeland, University of Canterbury, Christchurch (NZ)

Title: The Imitation Game: Alan Turing and Artificial Intelligence

The field of Artificial Intelligence is commonly believed to have originated in the United States in 1956. In fact, the field's origins can be traced back 15 years earlier, to the wartime work of Alan Turing on the Enigma code at Bletchley Park, the British codebreaking headquarters. This lecture describes the evolution of Turing's thinking about machine intelligence, from his early investigations at Bletchley Park through to his famous 1950 publication 'Computing Machinery and Intelligence', where he set out his 'imitation game'. Now known simply as the Turing Test, this has been the target of a hail of objections from both computer science and philosophy. I argue that the leading objections in the literature miss their mark, being for the most part based on misunderstandings of Turing's subtle test.