Fast and Limitless

Awesome IT

Awesome IT symposium offers a day of interesting lectures, discussions and fun demonstrations

10 April 2015

Developing IT and analyzing IT is always about the future. What are the possibilities and what are the constraints for future technology? This year's Awesome IT focuses on the speed and capabilities of computers and computer systems. On April 10th the 5th edition of Awesome IT, themed 'Fast and Limitless', will take place in Amsterdam.

During the conference, distinguished speakers from different fields will give their views on the theme. How fast can computers get? How do we deal with ever-growing data? The day will be all about new IT solutions for current problems. Subjects like CPU architectures, new operating systems and advanced programming languages will be covered. A day packed with lectures, discussions and, of course, a free lunch will be held in Studio/K.

Programme

09:30 - 10:00 Doors open
10:00 - 11:00 Harry Buhrman
Quantum Computing: Facts and Fiction
William Louth
Software Memories and Simulated Machines
11:00 - 12:00 Clemens Grelck
Single Assignment C: High Productivity meets High Performance
Lightning talks #1
Pieter Hijma
Fast and Limitless on Multiple Levels of Abstraction

Arianna Bisazza
Breaking language barriers with statistical machine translation

Daan Odijk
Feeding the Second Screen: Semantic Linking based on Subtitles

12:00 - 13:00 Lunch break
13:00 - 14:00 Ivan Godard
Mill CPU - Vectorization and Hardware Threading
Frank Takes
Network Science: Reasoning about a highly connected world
14:00 - 15:00 Ivan Godard
Mill CPU - Interprocess Communication
Lightning talks #2
Peter O'Connor
Two Bandwagons Collide - Deep Learning meets Neuromorphic Engineering

Merijn Verstraaten
Lost in a Maze of Vertices All Unalike

Wendy Gunther
In between Business and IT

15:00 - 15:15 Coffee break
15:15 - 16:15 Sander Bohté
Natural AI: from Neuroscience to Deep Learning
Carsten Munk
Jolla Mobile Operating System
16:15 - 17:00 Drinks

Speakers

  • Ivan Godard
  • Carsten Munk
  • Clemens Grelck
  • Harry Buhrman
  • Sander Bohté
  • Frank Takes
  • William Louth
  • Pieter Hijma
  • Wendy Gunther
  • Merijn Verstraaten
  • Peter O'Connor
  • Arianna Bisazza
  • Daan Odijk

Ivan Godard

Ivan Godard has designed, implemented and led the teams for 11 compilers for a variety of languages and targets, an operating system, an object-oriented database, and four instruction set architectures. He participated in the revision of Algol68 and is mentioned in its Report, was on the Green team that won the Ada language competition, designed the Mary family of system implementation languages, and was founding editor of the Machine Oriented Languages Bulletin. He is a Member Emeritus of IFIP Working Group 2.4 (Implementation Languages) and was a member of the committee that produced the IEEE and ISO floating-point standard 754-2011.

Ivan is currently CTO at Mill Computing. Mill Computing has developed the Mill, a clean-sheet rethink of general-purpose CPU architectures. The Mill is the subject of this talk.

Abstract

The talk will describe some aspects of the new Mill general-purpose CPU architecture that have not been previously disclosed.

The Mill is the first major advance in CPU architecture in decades. It uses a wide-issue and variable-length instruction format that can carry over thirty independent MIMD operations per instruction, in a layout somewhat similar to six VLIWs side-by-side. There are two program counters, one of which runs backwards. The execution unit is statically scheduled with exposed pipeline and no general registers, and achieves a typical IPC of 6 in open code, and 20 or more in loops. Memory references need not have backing DRAM, and can hide cache misses despite the static scheduling. Although there are no privileged operations and no supervisor mode, stack-smashing attacks are impossible, and clients and services (including the OS) are fully protected from each other. Some details may be found at MillComputing.com/docs.

The talk is in two parts. The first part will begin by reprising the Belt and Backless Memory, two previously described aspects of the Mill that are critical to understanding the new material. The talk will then present Spread Vectors and Skyline Vectors, the Mill's way to compute with variable-length SIMD vectors. The Mill family member "Gold" will be the example: Gold supports all power-of-two vector sizes from 16 to 1024 bits, with all scalar element sizes from one through 16 bytes.

The second part of the talk will be devoted to the Mill's hardware micro-threading. Spawning a thread, dispatching one for execution, idling it, killing it, and even such apparently unrelated facilities as setjmp/longjmp are all user-mode hardware operations on a Mill, with costs comparable to that of a normal function call. The talk will describe in detail how this works, with suggested uses from micro-kernel operating systems and parallel languages like Go.

Carsten Munk

Carsten Munk is Chief Research Engineer at Jolla. He is strongly passionate about open source and was previously involved in the MeeGo project by Intel and Nokia. Jolla was born in 2011 out of its founders' passion for open innovation in the mobile space.

Jolla is about offering a true alternative to the big players in the mobile industry. Our revolutionary Jolla smartphone, and the upcoming Jolla Tablet both run on the open and distinctive mobile operating system Sailfish OS, which was built on the heritage of MeeGo, an open source operating system formerly developed by Nokia among others. Our aim is to be open, independent, and transparent in everything we do. DIT – doing it together is in our hearts. Our developer and fan communities are an integral part of the way we operate, how we develop things and move forward. We listen, and we take feedback. Without our community, Jolla would not exist.

Abstract

In this talk you'll hear how the start-up Jolla managed to build a mobile OS and a mobile phone with just 100 employees. Staffed mostly with ex-Nokia developers, we managed to build, launch and ship a non-Android mobile device with our own mobile operating system in far less time than established vendors have ever needed. Beyond the story, I'll go into detail on how a modern mobile operating system is built up: not only from a technical point of view, but also how you make a great product, sell it, and work with your customers.

Clemens Grelck

Clemens Grelck is a University Lecturer in the Systems and Network Engineering lab at the University of Amsterdam, Netherlands. He obtained his PhD in 2001 from the University of Kiel, Germany, and held academic positions at the Universities of Kiel and Luebeck in Germany as well as the University of Hertfordshire in the United Kingdom. Clemens' research interests are in the areas of high-level parallel programming, the design of advanced (declarative) programming languages and concepts, and their efficient implementation on today's ubiquitous parallel systems through highly optimising compilers and adaptive runtime systems. He co-leads the development of the functional array programming language SAC (Single Assignment C) as well as that of the stream-based coordination language and component technology S-Net, and is responsible for all Amsterdam activities in both projects.

Abstract

Single Assignment C: High Productivity meets High Performance

Parallel programming has long been closely tied to high performance computing. Today's ubiquity of multi-core chip architectures radically changes this: parallel programming moves from a niche market into the mainstream of computing. At the same time, hardware becomes more and more diverse: varying numbers of cores with complex cache hierarchies, general-purpose graphics accelerators and other accelerator architectures like Intel's Xeon Phi, all with their specific, more or less machine-oriented programming models, challenge today's and even more so tomorrow's programmers. These rapid changes concern experienced HPC programmers and average software engineers alike. Since traditional software no longer automatically benefits from hardware innovation, new programming models are needed that reconcile productivity, portability and performance in the presence of modern heterogeneous compute architectures.

Single Assignment C (SAC) is an implicitly data parallel high-productivity language that combines purely functional, side-effect free semantics with a syntax familiar to imperative programmers. SAC features stateless, multi-dimensional arrays as first-class objects with fully automatic memory management. Two features are characteristic of array processing in SAC: firstly, the ability to write code that abstracts not only from the extent of arrays along individual dimensions but likewise from the number of dimensions, and, secondly, an (almost) index-free style of programming that treats arrays in a holistic way rather than as loose collections of elements. Advanced compilation technology, which aggressively exploits the functional semantics of SAC for code transformation, targets a range of contemporary multi- and many-core architectures, thus effectively virtualizing the diversity and heterogeneity of today's many-core architecture zoo.

We introduce the essential language design concepts of SAC and demonstrate how SAC supports programmers in writing highly abstract, reusable and elegant code. We discuss the major challenges in compiling SAC programs into efficiently executable code across a variety of multi- and many-core architectures, and show some performance figures that demonstrate the ability of SAC to achieve runtime performance levels that are competitive with machine-oriented, industry-strength programming environments. We conclude with some remarks on the fun, the pain and the pitfalls of programming language design.
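SAC itself is not shown here, but its rank-invariant, index-free style can be loosely illustrated in NumPy, which shares the idea of treating arrays holistically. The sketch below is an analogy only (the function name `normalize` and the data are made up for illustration); SAC's actual syntax and compilation model differ substantially.

```python
import numpy as np

def normalize(a):
    """Scale the values of an array of ANY rank into [0, 1].

    In the spirit of SAC's rank-invariant functions, the code never
    mentions the number of dimensions or any explicit index.
    """
    lo, hi = a.min(), a.max()
    return (a - lo) / (hi - lo)

vector = np.array([2.0, 4.0, 6.0])            # rank 1
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])   # rank 2

print(normalize(vector))  # the same code handles both shapes
print(normalize(matrix))
```

The same one-line definition applies unchanged to vectors, matrices, or higher-rank arrays, which is the property the abstract describes: abstraction from both the extent and the number of dimensions.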

Harry Buhrman

Harry Buhrman is head of the research group ‘Algorithms and Complexity’ at the Centrum Wiskunde & Informatica, which he joined in 1994. Since 2000 he also has a joint appointment as full professor of computer science at the University of Amsterdam. Buhrman's research focuses on quantum computing, algorithms, complexity theory, and computational biology. In 2003 he obtained a prestigious Vici-award and was coordinator of several national and international projects. The unifying theme through the work of Buhrman is the development of new algorithms and protocols, as well as establishing their optimality. Buhrman is editor of several international journals and is member of various advisory and scientific boards, such as the advisory board of the Institute for Quantum Computing (Waterloo, Canada).

Abstract

Quantum Computing: Facts and Fiction

Unbreakable security, unprecedented computational power, more efficient communication: the potential of quantum computing appears to be enormous. The quantum computer makes use of quantum mechanical effects that allow one to do computations in a fundamentally new way. I will give a brief introduction and overview of quantum computing and information processing. I will also discuss the shortcomings of quantum computers and the major challenges that lie ahead.

Sander Bohté

Dr Sander Bohté develops computational models to help understand the mechanisms that underlie neural information processing in nature, and to bring these mechanisms to fruition in modern neural networks. He obtained his MSc in physics at the UvA, after which he obtained his PhD at CWI on the topic of spiking neural networks. He worked as a postdoc at the University of Colorado at Boulder, and then returned to CWI to become a member of the scientific staff. He currently heads the neuroinformatics effort in the CWI Life Sciences group, where his research focuses on efficient spiking neural networks and biologically plausible neural reinforcement learning.

Abstract

Natural AI: from Neuroscience to Deep Learning

In the last three years, deep learning techniques have demonstrated breakthrough performance on critical "narrow AI" problems, like object, face and speech recognition, and also in areas like natural language generation. Now pursued actively within industry, deep neural networks have even demonstrated super-human performance on some of these problems. With myriad applications, the impact of these breakthroughs will almost certainly be felt throughout society in the immediate future. What will come next, however? Are we successfully reverse engineering the brain? To answer that question I will discuss the relationship between deep learning and neuroscience, the questions that we are just starting to answer or aim to answer, and why these are likely relevant for AI.

Pieter Hijma

Pieter Hijma is a Ph.D. student in the group of Henri Bal at the VU University, working on parallel programming languages for many-core devices. His research focuses on the tension between high-level abstractions, necessary for programmability and portability, and low-level abstractions, needed for high performance.

Abstract

Fast and Limitless on Multiple Levels of Abstraction

In this talk I present two systems that I developed during my Ph.D.: Many-Core Levels (MCL) and Cashmere. MCL focuses on writing kernels for many-core devices such as GPUs and offers multiple levels of abstraction to provide programmers a trade-off between high-level and low-level programming. The other system, Cashmere, incorporates MCL and adds divide-and-conquer techniques to write efficient and scalable applications for highly heterogeneous supercomputers, that is, supercomputers with many different types of many-core devices. The talk provides a view on the cost of having no limits in your abstractions and shows how fast these systems are.

Wendy Gunther

Wendy obtained her bachelor's degree in Computer Science from the University of Amsterdam in 2012. As she got more interested in the way IT is used within organizations, she decided to pursue the master's programme ICT in Business at Leiden University. She completed this programme summa cum laude in 2014 and is now a PhD candidate at the Vrije Universiteit, Faculty of Economics and Business Administration. As part of the Knowledge, Information and Innovation research group, she focuses on (big) data-driven business model innovation.

Abstract

In between Business and IT

As an interdisciplinary researcher, Wendy will focus on students' possibilities after finishing a bachelor's degree in Computer Science or a related field. She will talk about life as a PhD candidate at the VU, but also about the experience gained and lessons learned while getting there.

Frank Takes

Frank Takes is a lecturer and researcher at the computer science department of Leiden University. His main research interest is in algorithms and measures for the analysis of large graphs, with (online) social networks as one of the application domains. This includes the study of algorithms for solving combinatorial problems in graphs as well as investigating techniques related to knowledge discovery and data mining specifically for network datasets. Currently, he works on various data science research projects in cooperation with, amongst others, the Dutch National Police. In Leiden, he teaches two computer science courses on Business Intelligence and Social Network Analysis.

Abstract

Network Science: Reasoning about a highly connected world

We live in a connected world. In our social and digital lives, we are confronted with networks (or graphs) on a daily basis. When someone tells a story, it is likely that this story passed through various other people that together form a network of social interactions. Online social networks such as Facebook are based on gigantic networks in which people are connected through so-called friendship links. Browsing Wikipedia means traversing a large network of pages that is connected via clickable (hyper)links. Accessing one webpage on a mobile phone creates a few dozen wired or wireless connections between devices in a matter of microseconds. Networks are everywhere around us, and influence the way in which we communicate, socialize, search, navigate and consume information. When networks are stored in a digital format, they can produce an enormous amount of data. Within the field of computer science, we specifically consider tasks related to storing, retrieving, manipulating and understanding this network data in an automated and efficient way. This talk gives a broad overview of the field of network science, an exciting research area that deals with analyzing, reasoning about and understanding the highly connected world that we live in.
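The "chain of friendship links" idea from the abstract can be made concrete with a few lines of code. The sketch below, using a made-up toy network (the names and links are purely illustrative), computes the shortest chain between two people with breadth-first search, one of the basic building blocks of network analysis:

```python
from collections import deque

# A toy friendship network as an adjacency list (hypothetical people).
friends = {
    "Anna":  ["Bob", "Carol"],
    "Bob":   ["Anna", "Dave"],
    "Carol": ["Anna", "Dave"],
    "Dave":  ["Bob", "Carol", "Erik"],
    "Erik":  ["Dave"],
}

def distance(graph, start, goal):
    """Length of the shortest chain of friendship links (BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, d = queue.popleft()
        if person == goal:
            return d
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, d + 1))
    return None  # no chain connects the two people

print(distance(friends, "Anna", "Erik"))  # → 3 (Anna - Bob - Dave - Erik)
```

Real network-science datasets have millions of nodes rather than five, which is exactly why the efficient storage, retrieval and algorithmic techniques mentioned in the abstract matter.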

Merijn Verstraaten

Merijn Verstraaten is a Ph.D. student in the Systems & Network Engineering group at the UvA. His research focuses on exploiting fine-grained parallelism and NUMA systems for large-scale graph processing, for example by using GPGPUs and other accelerators. His long-term plan is to replace the CS curriculum everywhere with programming language theory and math when no one's looking.

Abstract

Lost in a Maze of Vertices All Unalike

In this talk I give a quick introduction to graph processing and the fundamental problems that occur when we try to map large-scale graph processing algorithms to modern distributed, NUMA and GPGPU systems.

Afterwards I'll cover some of my ideas on how to mitigate the load-balancing, latency, and memory restriction issues raised by the above platforms.

Peter O'Connor

Peter O'Connor is a PhD candidate in the Intelligent Systems Lab at University of Amsterdam, and works on implementing efficient sampling methods for learning deep networks. Previously he has worked at Brain Corporation and the Institute of Neuroinformatics in Zurich. He is interested in developing algorithms for doing Bayesian learning and inference in neural networks.

Abstract

Two Bandwagons Collide - Deep Learning meets Neuromorphic Engineering

The field of Deep Learning has taken off lately, with deep neural networks winning competitions in many different areas of machine learning. Meanwhile, researchers in neuromorphic engineering have been building computer chips that attempt to mimic the organization of the brain. These fields are converging fast, and we will show the results of some work that helps to bridge these two disciplines.

Arianna Bisazza

Arianna Bisazza is a post-doc researcher in the Information and Language Processing Systems group of the University of Amsterdam. Her work focuses on the statistical modeling of natural languages, with the prime goal of improving the quality of machine translation and speech recognition of challenging languages. Before joining UvA she obtained her PhD from the University of Trento, Italy, in 2013. She has spent research periods at Microsoft Research and Dublin City University, and participated in the development of one of the most widely used open-source machine translation platforms.

Abstract

Breaking language barriers with statistical machine translation

Imagine an Internet that users can seamlessly navigate in their native languages. Thanks to the advances of statistical machine translation (SMT), this vision is already a reality, but not for everyone. In fact, while more and more languages are being served by online engines like Google Translate, SMT quality is excellent for only a few language pairs, like French-English, and remains poor for most others. In this talk I will explain how state-of-the-art SMT works and which research directions look promising for improving it across a wide range of language pairs.

William Louth

William Louth is a renowned software engineer with particular expertise in self-adaptive software runtimes, adaptive control, self-regulation, resilience engineering, information visualization, software simulation and mirroring, as well as performance measurement and optimization. He developed the first adaptive, multi-strategy dynamic metering solution that addressed the many challenges in measuring the performance of low-latency applications, in particular gaming and trading platforms. He is the architect of "The Matrix for Machines": a scalable discrete event simulation engine that replays, in near real-time, the execution behavior and resource consumption of metered activities across an entire infrastructure of instrumented application runtimes within a single simulated mirroring runtime.

Abstract

Software Memories and Simulated Machines

Software has memory but no memories. But what if software had the ability to recall, and with it the ability to play out episodic (behavioral) memories time and time again in a different space: a simulated mirror world? What if software machines could see each other act, much like humans do, without the machine code needing to send a message or make a call? What if we created a matrix for machines that allowed us to extend and augment software post-execution, irrespective of source language, runtime and platform? In this talk I will describe the realization of this vision on the Java platform, and how the search for a model to bridge the real and simulated machine worlds led to the discovery of a unifying framework for understanding both man and machine activity and interaction that could change how we design, develop, deploy, monitor and manage large-scale software systems. The future of software will be simulated, as will be the past and present...eventually.

Daan Odijk

Daan Odijk is a PhD candidate in the Information and Language Processing Systems group of the University of Amsterdam. His work focuses on semantic search and its applications. Daan is a graduate of the UvA and worked as a scientific programmer and high school teacher before starting his PhD.

Abstract

Feeding the Second Screen: Semantic Linking based on Subtitles

Television is changing. Increasingly, broadcasts are consumed interactively. This allows broadcasters to provide consumers with additional background information that they may bookmark for later consumption. To support this type of functionality, we consider the task of linking textual streams derived from live broadcasts to Wikipedia. While link generation has received considerable attention in recent years, our task has unique demands: an approach needs to (i) be high-precision oriented, (ii) perform in real time, (iii) work in a streaming setting, and (iv) typically cope with very limited context. In this talk I present a learning-to-rerank approach that significantly improves over a strong baseline in terms of effectiveness and whose processing time is very short.
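The streaming setting the abstract describes can be pictured with a deliberately simplified sketch. The code below is not the learning-to-rerank approach of the talk: it only does exact phrase matching against a tiny, made-up anchor dictionary (the `ANCHORS` entries are hypothetical), but it shows the shape of the task, where links must be produced line by line as subtitle text arrives:

```python
# Hypothetical mini "knowledge base": surface phrase -> Wikipedia title.
ANCHORS = {
    "amsterdam": "Amsterdam",
    "machine translation": "Machine_translation",
    "wikipedia": "Wikipedia",
}

def link_stream(subtitle_lines):
    """Yield (line, links) pairs, linking each line as it streams in.

    Exact matching is high-precision but low-recall; the talk's
    learning-to-rerank approach generates and reranks candidate
    links instead of relying on literal phrase matches.
    """
    for line in subtitle_lines:
        text = line.lower()
        links = [title for phrase, title in ANCHORS.items() if phrase in text]
        yield line, links

stream = ["Tonight we visit Amsterdam",
          "A short film about machine translation"]
for line, links in link_stream(stream):
    print(line, "->", links)
```

Because each line is processed independently and immediately, the sketch satisfies the streaming and real-time demands (ii) and (iii); the hard part the talk addresses is doing so with high precision under the very limited context of demand (iv).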

Organisation

We are a committee of bachelor students and a PhD student. Our goal is to organize a fun, extraordinary and informative day, quite unlike a usual day at the university. Listed below are the people you can contact for questions: