Meet the Demo participants

UU Crowd Simulation Research and Development - Towards making a city smarter
The increasing urbanization of the world population presents new challenges for decision makers. Real-time crowd simulation is crucial in addressing these challenges, which include determining evacuation times in complex buildings, avoiding overcrowded areas during mass events, and improving crowd flow in cities. Based on our research, we have developed a simulation framework with unique features aimed at realism, speed, and accuracy. Our software is available for research and commercial use. We welcome researchers and companies to collaborate, e.g. to write joint project proposals or to integrate our framework into their products.

Our crowd simulation framework can handle huge multi-layered 3D virtual environments. A filter pipeline extracts an efficient and flexible representation of the walkable areas, which is then converted to a navigation mesh. This mesh is used by our framework through a generic five-level planning hierarchy, which enables the simulation of up to 60,000 autonomous, social pedestrians in real time. The framework can easily be extended with new features, such as bicycles and density-based planning, allowing us to address current and future challenges in crowded cities.
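
As an illustration of how such a layered planning approach can be organized per agent, the sketch below walks one simulated pedestrian through global planning, route following, and local collision avoidance. The level names, the straight-line "route", and the steering rule are simplifications we assume for illustration; they do not reproduce the framework's actual algorithms or API.

```python
import math

# Illustrative sketch of a layered planning loop for one simulated pedestrian.
# Level names, the trivial route, and the steering rule are assumptions for
# illustration; they are not the framework's real API or algorithms.

def plan_global_route(start, goal):
    """Global planning: in the real framework this is a search over the
    navigation mesh; here we simply return a straight two-point route."""
    return [start, goal]

def next_attraction_point(route, position, lookahead=1.0):
    """Route following: pick a point on the route a little ahead of the agent."""
    gx, gy = route[-1]
    px, py = position
    dx, dy = gx - px, gy - py
    dist = math.hypot(dx, dy)
    if dist <= lookahead:
        return route[-1]
    return (px + dx / dist * lookahead, py + dy / dist * lookahead)

def local_movement(position, attraction, neighbors, max_speed=1.4):
    """Local movement: steer towards the attraction point and push away
    from nearby agents (a crude stand-in for collision avoidance)."""
    px, py = position
    vx, vy = attraction[0] - px, attraction[1] - py
    for nx, ny in neighbors:
        dx, dy = px - nx, py - ny
        d = math.hypot(dx, dy) or 1e-6
        if d < 0.5:                      # assumed personal-space radius
            vx += dx / d * (0.5 - d)
            vy += dy / d * (0.5 - d)
    speed = math.hypot(vx, vy) or 1e-6
    scale = min(max_speed, speed) / speed
    return (vx * scale, vy * scale)

def step(position, goal, neighbors, dt=0.1):
    route = plan_global_route(position, goal)
    attraction = next_attraction_point(route, position)
    vx, vy = local_movement(position, attraction, neighbors)
    return (position[0] + vx * dt, position[1] + vy * dt)

# Example: one agent walking towards (10, 0), avoiding a bystander at (1, 0.2).
pos = (0.0, 0.0)
for _ in range(5):
    pos = step(pos, (10.0, 0.0), [(1.0, 0.2)])
print(pos)
```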

We will show simulations developed for the Grand Départ of the Tour de France (July 2015, Utrecht) and evacuation studies for the Noord/Zuidlijn (2015-2016, Amsterdam). Next, we will show nicely rendered movies of intricate scenarios. Participants will be able to try out our software themselves.


 

OpenDC Demo

Our society today depends on computer systems. Numerous daily activities, from the operation of organizations of all scales to modern governance to consumers accessing various services, are part of a Digital Economy worth tens of billions of euros annually and supporting millions of jobs.

In this new economy, massive computer systems, often grouped in datacenters, serve as factories "producing" cloud services that are consumed on a massive scale.

To achieve the promise of this relatively new industry, we must overcome new scientific and engineering challenges. Moreover, we must address the new demands of training human resources for this complex field, focusing on both technical and collaborative skills.

Towards addressing these challenges and demands, we propose OpenDC, a collection of scientific methods, datacenter technologies and concepts, education practices, and software and data artifacts focusing on the design, operation, and use of modern datacenters.

 

Triangulating The Netherlands on the fly using a Spatial DBMS

3D digital city models, important for urban planning, are currently constructed from massive point clouds obtained through airborne LiDAR (Light Detection and Ranging). They are semantically enriched with information from auxiliary GIS data, such as cadastral data, which contains information about property boundaries, road networks, rivers, lakes, etc.

In this work we demonstrate a column-oriented SDBMS enhanced with a set of optimized operators that provide effective data skipping, efficient spatial operations, and interactive data visualization. These features are exploited to build 3D digital city models from the latest topographic and cadastral data for The Netherlands. Through a web interface based on X3D technology, the user requests 3D digital city models with predicates on the semantic attributes.
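
As a rough illustration of how a client might pose such a request, the sketch below issues a query that combines a spatial predicate with a semantic one. The driver choice, table and column names, and the PostGIS-style SQL functions are assumptions made for illustration; the demo's actual schema and operators may differ.

```python
# Illustrative sketch: asking the spatial DBMS for buildings inside a bounding box,
# filtered on a semantic attribute, over a standard DB-API connection. Driver,
# schema, and the spatial/X3D functions are hypothetical assumptions.
import pymonetdb  # assuming a MonetDB-like column store client (hypothetical choice)

conn = pymonetdb.connect(database="nl3d", hostname="localhost",
                         username="monetdb", password="monetdb")
cur = conn.cursor()

# Select buildings in an area of interest that satisfy a semantic predicate,
# and export their geometry in X3D form for the web viewer.
cur.execute("""
    SELECT id, ST_AsX3D(geom)
    FROM buildings
    WHERE ST_Intersects(geom, ST_MakeEnvelope(155000, 463000, 156000, 464000))
      AND land_use = 'residential'
""")
for building_id, x3d_fragment in cur.fetchall():
    print(building_id, len(x3d_fragment))
```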

The demo has been presented at ACM SIGSPATIAL 2016.

 

A Big Data Approach for Monitoring of High Volume Semiconductor Manufacturing

In semiconductor manufacturing, continuous on-line monitoring of high-volume manufacturing prevents production stops and yield loss. The challenges towards this goal are: 1) the complexity of lithography machines, which are composed of hundreds of mechanical and optical components, 2) the high rate and volume of data acquisition from different lithography and metrology machines, and 3) the scarcity of performance measurements due to their cost.

This paper addresses these challenges by 1) visualizing and ranking the factors most relevant to a properly selected performance metric, 2) efficiently organizing Big Data from different sources, and 3) predicting the performance with machine learning when measurements are lacking.

Even though this project targets semiconductor manufacturing, its methodology is applicable to monitoring any complex system with many potentially relevant features and imbalanced datasets.
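
To make the prediction ingredient concrete, the sketch below trains a classifier on synthetic "machine data" to predict a scarce, imbalanced performance label and then reuses the model to rank the most relevant factors. The feature layout, label definition, and model choice are assumptions for illustration, not the project's actual pipeline.

```python
# Minimal sketch: predict a scarce performance label from plentiful machine data
# with a model that tolerates class imbalance, then rank the relevant factors.
# Data, features, and the model choice are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))                      # stand-in for per-exposure sensor readings
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=n) > 2.2).astype(int)  # rare "out of spec"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" compensates for the scarcity of bad measurements.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))

# Ranking the factors most relevant to the performance metric can reuse the model:
ranking = sorted(enumerate(model.feature_importances_), key=lambda p: -p[1])[:5]
print("top factors:", ranking)
```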

 

Improving Code Quality Education with Better Code Hub

During the demo we will present SIG's code metrics tool Better Code Hub (BCH). This online tool scans user-submitted code against the 10 maintainability guidelines defined in Joost Visser's book "Building Maintainable Software". Specific code snippets are presented to the user to highlight the points of improvement, together with the guideline, its rationale, and its tolerance.

In the demo we will go through the complete process of analyzing the code. Following the GitHub workflow, we identify the refactoring candidates. Each code analysis results in a clear mark. The book will also be available for further reference, showing its direct connection to the tool. We will show all the steps interested users need to start using the tool themselves.

Connected to the demo we will present a poster of the related ongoing master's thesis research on measuring the impact BCH has on programming courses. This focuses on the goals of transferring knowledge about programming and improving the code quality of students' work without too much overhead for the teaching staff and students. The thesis investigates to what extent these goals can be accomplished with BCH by observing groups of students participating in programming courses. We hope to gather critical feedback and spark the interest of other teachers.

 

A Knowledge-Based Decision Support System for Technology Selection in Software Products

Software producing organizations face a multi-criteria decision-making (MCDM) problem when selecting the right technology (database management system, cloud platform, software design pattern, etc.), because a large number of very similar decisions have to be made. In addition, the number of potential solutions (alternatives) and decision factors (functional and non-functional requirements) is large.

Typically, no unique optimal solution exists for such problems, and the decision maker's preferences must be used to differentiate between solutions. To support decision makers in selecting the right technology according to their preferences at early stages of developing a software product, we have designed and implemented a model-based decision support system (MBDSS).
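
As a minimal illustration of the kind of preference-driven ranking such a system can perform, the sketch below scores a handful of alternatives with a weighted sum over decision factors. The alternatives, factor scores, and weights are invented for illustration; the actual MBDSS models technology knowledge in far more detail.

```python
# Minimal sketch of a preference-driven ranking step: a weighted-sum score over
# decision factors. Alternatives, scores, and weights are made up for illustration.
alternatives = {
    # factor scores in [0, 1]: (scalability, maturity, licensing-cost fit)
    "PostgreSQL": (0.7, 0.9, 0.9),
    "MongoDB":    (0.8, 0.8, 0.8),
    "Cassandra":  (0.9, 0.7, 0.8),
}

# Decision-maker preferences: how much each factor matters (weights sum to 1).
weights = (0.5, 0.3, 0.2)

def score(factor_scores, weights):
    return sum(s * w for s, w in zip(factor_scores, weights))

ranked = sorted(alternatives.items(), key=lambda kv: score(kv[1], weights), reverse=True)
for name, scores in ranked:
    print(f"{name}: {score(scores, weights):.2f}")
```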

 

ControCurator: Human-Machine Framework For Identifying Controversy

This demo describes the ControCurator human-machine framework for identifying controversy in multimodal data. The goal of ControCurator is to enable modern information access systems to discover and understand controversial topics and events by bringing together crowds and machines in a joint active learning workflow for the creation of adequate training data.

This active learning workflow allows a user to identify and understand controversy in ongoing issues, regardless of whether there is existing knowledge on the topic.

 

Photo Rank: Popularity Prediction of Photos in an Offline Collection

Every minute, hundreds of thousands of photos are uploaded to the internet through various social media and photo-sharing platforms. While some photos get millions of likes, others may be totally overlooked.

This raises a question: can we predict the popularity a photo will receive before it is uploaded? In this technical demonstration, we showcase a photo ranking method that considers the visual content of photos to predict the popularity of each photo in an offline collection.

We use state-of-the-art deep learning methods for extracting visual features such as semantic concept, low-level, and visual sentiment features. We combine the effect of each of the features on predicting the popularity of photos and provide a ranking based on the predictions. Using a dataset of about 200k photos from Instagram related to 400 brands, we demonstrate that we can reliably predict the normalized like count of photos.
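
The sketch below illustrates the fusion-and-ranking step on synthetic data: per-photo feature vectors are concatenated, a regressor predicts a normalized popularity score, and the collection is sorted by the prediction. The feature dimensions and the choice of regressor are assumptions for illustration; the deep feature extraction itself is omitted.

```python
# Illustrative sketch of the ranking step: fuse per-photo feature vectors,
# regress the normalized like count, and sort an offline collection by the
# prediction. Arrays, dimensions, and the regressor are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_photos = 1000
semantic  = rng.normal(size=(n_photos, 128))  # e.g. concept scores from a CNN
low_level = rng.normal(size=(n_photos, 64))   # e.g. color/texture descriptors
sentiment = rng.normal(size=(n_photos, 32))   # e.g. visual sentiment features

X = np.hstack([semantic, low_level, sentiment])   # fuse features by concatenation
y = rng.normal(size=n_photos)                     # stand-in for normalized like counts

model = Ridge(alpha=1.0).fit(X[:800], y[:800])    # train on photos with known popularity

scores = model.predict(X[800:])                   # predict for the offline collection
ranking = np.argsort(-scores)                     # most popular predicted photo first
print("predicted top-5 photo indices:", (800 + ranking[:5]).tolist())
```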

 

Uncertainty-Aware Big Data Systems

The current use of systems for decision making based on big data (either in the form of data-driven analytics or model-driven analytics) tends to overlook the fact that the data selection and data collection steps make datasets intrinsically uncertain. Datasets may be biased, incomplete, noisy, heterogeneous, or ambiguous in their meaning. Models trained on these data will be imperfect, and the choice of a specific analytic approach might introduce new sources of uncertainty. Users may interpret the outcomes of analytics-based systems in different ways, leading to different decisions being taken using the same data. Finally, the definition of uncertainty itself goes beyond the concept of "error" and embraces concepts familiar to many different disciplines: statistical uncertainty, semantic uncertainty, and psychological uncertainty, to name a few.

There is limited understanding of how this uncertainty translates into impact on the decisions of the user. In the research line on uncertainty in the TNO research program "Making Sense of Big Data", we develop ways to quantify and show the propagation of uncertainty through the whole system, to make clear how well it really performs. Communicating this clearly helps the user make a better decision and builds trust in the system. Furthermore, if the system not only communicates the uncertainty of the outcome but also shows the origin of that uncertainty, the user has an additional perspective upon which to base a decision. Such a system could even go one step further and present the user with alternative options if it detects high uncertainty. We call big data systems that provide this capability "Uncertainty Aware" systems.
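
One simple way to make such propagation concrete is Monte Carlo sampling over an uncertain input, as in the sketch below. The toy detector, its parameters, and the decision thresholds are invented for illustration and are not the actual surveillance system.

```python
# Minimal sketch of propagating input uncertainty through an analytic step and
# reporting it to the user via Monte Carlo sampling. The detector, its error
# model, and the thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def detector(frame_quality, noise):
    """Toy 'event score': degrades with poor frame quality and sensor noise."""
    return 0.8 * frame_quality - 0.5 * noise

# The inputs are only known approximately (e.g. due to compression, weather).
quality_samples = rng.normal(loc=0.7, scale=0.15, size=10_000)
noise_samples   = rng.normal(loc=0.1, scale=0.05, size=10_000)

scores = detector(quality_samples, noise_samples)
p_event = np.mean(scores > 0.4)   # probability that the system would raise an alert

# An uncertainty-aware system would show not just the decision but its spread:
print(f"alert probability: {p_event:.2f}")
print(f"score 90% interval: [{np.quantile(scores, 0.05):.2f}, {np.quantile(scores, 0.95):.2f}]")
if 0.2 < p_event < 0.8:
    print("high uncertainty: present alternative options to the operator")
```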

The demo will show how an Uncertainty Aware system could interact with the user through an interactive game. The game is based on a use case from the surveillance domain and on a system that uses computer vision and machine learning algorithms to detect events in aerial video data.

 

DIVE+: Explorative Search in Integrated Linked Media

DIVE+ is a linked data digital collection browser, developed to provide integrated, innovative, and interactive access to objects from various heterogeneous online collections. The DIVE+ demonstrator extends the digital hermeneutics approach and uses events and event narratives as context for searching, browsing, and presenting individual and groups of objects.

The DIVE+ approach is innovative in several ways: (1) it integrates four heterogeneous cultural heritage collections (news broadcasts from the Netherlands Institute for Sound and Vision, radio bulletins from the Dutch National Library, and cultural heritage objects from the Amsterdam Museum and the Tropenmuseum); (2) it integrates links to external linked open datasets (DBpedia, AAT, and ULAN); (3) it offers an intuitive way to deal with event narratives; and (4) it provides automated crowdsourcing enrichment of multimedia collection objects with event annotations.

The innovative interface combines Web technology and theory of interpretation to allow for browsing the network of data in an intuitive "infinite" fashion.

 

Wireless Reprogramming of Transiently Powered Platform

In this demonstration we will show how to wirelessly upload new firmware (or a large file) to a small embedded computer. The challenge is that the computing platform used is battery-less and powered entirely by ambient energy (radio frequency).

The resulting small amounts of energy, stored in a small capacitor, make execution of any program problematic. We will demonstrate how to guarantee code execution despite such frequent energy interruptions.
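
The sketch below illustrates the underlying idea in a deliberately simplified form: split the computation into small steps and checkpoint progress to non-volatile storage, so that after a power loss execution resumes instead of restarting. The file-based checkpoint and the toy task are assumptions for illustration; the real platform checkpoints to non-volatile memory on an energy-constrained microcontroller.

```python
# Illustrative sketch of intermittent execution: checkpoint progress so that a
# power loss never forces a restart from scratch. The file-based checkpoint and
# the toy task are simplifications for illustration only.
import json, os

CHECKPOINT = "checkpoint.json"

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "acc": 0}            # fresh start

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)          # atomic update: a power loss never corrupts it

state = load_state()
while state["i"] < 1000:                 # the long-running computation
    state["acc"] += state["i"]           # one small unit of work
    state["i"] += 1
    if state["i"] % 50 == 0:             # checkpoint often enough to survive brown-outs
        save_state(state)

save_state(state)
print("result:", state["acc"])
```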

This demo is a result of two recent publications presented at IEEE INFOCOM 2016 and IEEE INFOCOM 2017 conferences and a collaboration between TU Delft and the University of Washington, Seattle, USA.

 

Rebel, a DSL for banking systems

This demo is based on an earlier demo in an industrial setting.

Large organizations like ING suffer from the ever-growing complexity of their systems. Evolving the software in the face of change becomes harder and harder, since a single change can affect a much larger part of the system than predicted upfront. A large contributing factor to this problem is that the actual domain knowledge is often implicit, incomplete, or out of date, making it difficult to reason about the correct behavior of the system as a whole. When domain knowledge is recorded, it is captured in informal and possibly outdated documents (such as Word files, Excel sheets and Confluence pages), making it hard to relate the requirements to the actual implementation of the software. To tackle this problem we designed the Rebel specification language.

Rebel is a domain-specific language for controlling the intrinsic complexity of software for financial systems. ING and CWI jointly developed Rebel and an Integrated Specification Environment (ISE), which currently offers automated simulation, visualisation, and model checking of Rebel specifications. These specifications can be used as a means of communication between stakeholders, to check existing system implementations and, ultimately, to serve as a basis for generating new systems. Specifications can be translated into Satisfiability Modulo Theories (SMT) constraints, solved using an SMT solver, and translated back into the Rebel ISE for interpretation.
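
As a toy illustration of the kind of question an SMT backend can answer about a specification, the sketch below asks the Z3 solver (via its Python bindings) whether a 'withdraw' transition can break the invariant that a balance never becomes negative. The encoding is invented for illustration and is not Rebel's actual translation to SMT.

```python
# Toy sketch of checking a specification invariant with an SMT solver (Z3).
# The transition encoding is invented for illustration, not Rebel's real backend.
from z3 import Ints, Solver, sat

balance, amount, balance_next = Ints("balance amount balance_next")

s = Solver()
s.add(balance >= 0)                      # invariant holds before the transition
s.add(amount > 0)                        # precondition of 'withdraw'
s.add(balance_next == balance - amount)  # effect of 'withdraw'
s.add(balance_next < 0)                  # ...can the invariant be violated afterwards?

if s.check() == sat:
    # A counterexample exists: the guard 'amount <= balance' is missing from the spec.
    print("invariant can break, e.g.:", s.model())
else:
    print("invariant preserved")
```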

In this demo session we report on our design choices for Rebel, the implementation and features of the ISE, our initial observations on the application of Rebel inside ING, and the preliminary results of generating correct, available, and resilient implementations based on actor systems from these specifications.

 

The ρVEX Dynamically Reconfigurable VLIW Processor

The ρVEX is a computing platform developed at TU Delft. It is a dynamic platform targeting dynamic embedded workloads. Its goal is to continuously optimize the hardware for the running tasks. It does so by assigning computational resources (in the form of VLIW datapaths) to threads. These core adaptations can be performed on the order of 5 clock cycles, allowing a fine-grained level of tuning with low overhead. Additionally, this results in a very low interrupt response time, because the interrupt handler thread can be assigned resources as soon as it is triggered (no context saving is required, because several thread contexts are stored in the processor). On the application level, the platform can provide both high performance for single-threaded, computation-heavy workloads (such as DSP and media processing) and high throughput for multi-threaded applications. Combined with its advantageous properties for real-time workloads (the core provides full performance isolation between threads when they are assigned datapaths), it is suitable for mixed-criticality systems that exhibit a large diversity in their characteristics (ILP, intensity) and requirements (criticality, performance).

The Demo

The main demo consists of a ρVEX processor, prototyped on an FPGA, running randomly generated task-graphs of small identical workloads (depicted on the upper part of the screen as raytraced images). The lower part of the screen depicts the current core configuration (how the datapaths are assigned to the tasks), and a combined graph showing the power utilization and the core configurations over time.

In addition to the main demo showing the fully dynamic platform, we have a demonstrator for static image processing workloads (such as biomedical imaging applications) in which we utilize large numbers of small VLIW cores in a streaming organization. Each processor in a stream performs a processing step (e.g., a filter) and passes its output on to the next. The cores used in this demo are generated from the same codebase, but are configured (using VHDL generics) to a much smaller version in order to place a much larger number of cores (64 cores in this demo) onto the FPGA and to achieve a higher operating frequency (200+ MHz).

 

Nerdalize - Enabling research by making cloud computing easy and understandable for everyone!

Nerdalize is a startup with the goal of innovating cloud computing. Computers and (big) data play an increasing role in every research field. A lot of data is available, and more and more algorithms and software are being developed to contribute to computational research. As a result, many research institutes and engineering firms are no longer able to provide their researchers with enough computing power to do their work. Internal clusters are constantly occupied, so they are looking for ways to run their computations at a cloud provider. But which provider to choose? And how do you even run a computation in the cloud?

At Nerdalize we developed a platform that makes it easy for people without much IT knowledge to run a computation at a cloud provider. This way we can give researchers access to all the computing power they need to do their work. Customers like Deltares and LUMC have already been using our platform to easily bring their hydro simulations and biological computations to the cloud.

Our goal is to facilitate the use of IT within research in order to enable researchers to deliver great results. We therefore believe that our platform is a very good addition to the event, and we look forward to demonstrating it at ICT.OPEN2017!

 

CrowdTruth: Human Computing for the Real World

In this demonstration paper, we introduce the CrowdTruth framework, a diversity-harnessing approach for gathering annotated data from the crowd. Inspired by the simple intuition that human interpretation is subjective, and by the observation that disagreement is a natural product of having multiple people perform annotation tasks, CrowdTruth can provide useful insights about task design, annotation clarity, and annotator quality.

We reject the traditional notion of ground truth in gold standard annotation, in which annotation tasks are viewed as having a single correct answer. Instead, we adopt a disagreement-based ground truth, which we call CrowdTruth.
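
The sketch below gives a minimal flavor of a disagreement-aware quality signal: each worker's judgement on a unit is a vector over answer options, and a worker is compared against the aggregate of the others. The data and the metric are simplified illustrations, not the framework's full set of CrowdTruth metrics.

```python
# Minimal sketch of a disagreement-aware quality signal in the CrowdTruth spirit.
# Answer options, annotations, and the metric are simplified illustrations.
import numpy as np

options = ["controversial", "not_controversial", "unclear"]

# worker -> annotation vector for one unit (1 marks the chosen option)
annotations = {
    "w1": np.array([1, 0, 0]),
    "w2": np.array([1, 0, 0]),
    "w3": np.array([0, 1, 0]),
    "w4": np.array([1, 0, 0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

unit_vector = sum(annotations.values())          # aggregated unit annotation vector
for worker, vec in annotations.items():
    others = unit_vector - vec                   # leave the worker out of the aggregate
    print(worker, "agreement with the crowd:", round(cosine(vec, others), 2))

# A low score flags a low-quality worker, an ambiguous unit, or an unclear task:
# exactly the kind of insight disagreement is meant to provide.
```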

Considering the continuously growing demand for gold standard data in different domains and across different modalities, we believe CrowdTruth is of critical relevance, providing an innovative scientific methodology for deploying crowdsourcing in a systematic, reliable, and replicable manner.

 

Exercise Games Designed for Rehabilitation of Elderly Patients after Hip Replacement Surgery

Patients who receive rehabilitation after hip replacement surgery are shown to have increased muscle strength and better functional performance. However, traditional physiotherapy is often tedious and leads to poor adherence. New technologies such as exercise games may provide ways to increase the engagement of elderly patients and the uptake of rehabilitation exercises.

We present Fietsgame (meaning cycling game in Dutch), an interactive virtual exercise game system which translates existing rehabilitation exercises into fun exercise games.

The system connects the exercise games with the patients' personal health records and a therapist control interface through an Internet of Things server. That is, workouts assigned by physiotherapists can be loaded into the exercise game; the exercise game records the patients' workout data and sends the data directly to the patients' personal health records. Based on these records, physiotherapists can report on the patients' condition, e.g., whether direct help is necessary, extra attention is needed, or the patient is doing fine.

Thus both the patients and the physiotherapists can monitor the patients' medical status. A usability test of the Fietsgame has been conducted with 7 elderly patients and 2 physiotherapists. The results showed that patients found the game system useful and easy to use; most felt it would be a useful tool in their further rehabilitation and would like to use the game in the future. The therapists indicated that the exercise games meet the criteria of motor rehabilitation and intend to continue using the game as a part of their rehabilitation treatment with patients.

Hence, we conclude that the Fietsgame can be used as an alternative to traditional motor rehabilitation for patients recovering from hip surgery.

 

Holst Centre / Imec

Holst Centre is an independent R&D center that develops wireless autonomous sensor technologies and flexible electronics, in an open innovation setting and in dedicated research trajectories. A key feature of Holst Centre is its partnership model with industry and academia, based around shared roadmaps and programs. It is this kind of cross-fertilization that enables Holst Centre to tune its scientific strategy to industrial needs.

Holst Centre was set up in 2005 by imec (Flanders, Belgium) and TNO (The Netherlands) and is supported by local, regional and national governments. It is named after Gilles Holst, a Dutch pioneer in Research and Development and first director of Philips Research.

Located on High Tech Campus Eindhoven, Holst Centre benefits from, and contributes to, the state-of-the-art on-site facilities. Holst Centre has over 200 employees from some 28 nations and a commitment from over 40 industrial partners.

 

Demo on Scalable Key Provisioning from Silicon to Cloud

Our demo presents a key provisioning method for the Internet of Things (IoT) based on SRAM Physical Unclonable Functions. This method removes the barriers to securing a broad range of IoT devices, even resource-limited endpoints, building the foundation for an Internet of Things we can trust.

SRAM Physical Unclonable Functions, or PUFs, use the behavior of standard SRAM memory, available in any digital chip, to extract a unique pattern or 'silicon fingerprint'. They are virtually impossible to clone or predict. This makes them very suitable for applications such as secure key generation and storage, device authentication, flexible key provisioning, and chip asset management. Due to inherent deep-submicron process variations in production, every transistor in an SRAM cell has slightly random electrical properties. This randomness is expressed in the start-up values of uninitialized SRAM memory. These values form a unique chip fingerprint, called the SRAM PUF response.
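
The sketch below illustrates, in simplified form, how noisy start-up values can be turned into a stable key: repeated read-outs are combined by majority vote (a crude stand-in for the helper-data and error-correction machinery used in practice) and the stabilized fingerprint is hashed into a key. The simulated read-outs and parameters are assumptions for illustration.

```python
# Illustrative sketch: derive a stable key from noisy SRAM start-up values.
# Majority voting stands in for the fuzzy-extractor/error-correction machinery
# used in real SRAM PUF products; the read-outs here are simulated.
import hashlib
import random

random.seed(0)
reference = [random.randint(0, 1) for _ in range(256)]   # the chip's "fingerprint" bits

def read_sram():
    """Simulated start-up read: the fingerprint plus a little bit-flip noise."""
    return [b ^ (random.random() < 0.05) for b in reference]

# Majority vote over several power-ups stabilizes the noisy bits.
reads = [read_sram() for _ in range(9)]
stable_bits = [int(sum(col) > len(reads) // 2) for col in zip(*reads)]

# Derive the device root key from the stabilized fingerprint.
root_key = hashlib.sha256(bytes(stable_bits)).hexdigest()
print("derived root key:", root_key)
```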

In our demo a microcontroller on a development kit will be provisioned from a provisioning appliance (Laptop). The keys are generated on the microcontroller and the identity certificate is created on the provisioning appliance.

Our demo will show the following functionality: the asymmetric key pair (d, Q) that is derived from the SRAM PUF root key is used for i) setting up a Certified Identity and ii) providing a secure channel to the device through which Application Keys are provisioned.

In order to set up a Certified Identity for the device, a One-Time Trust (OTT) event is needed. It combines the generation of device root keys with a certification step by a trusted party in the supply chain. The OTT event comprises the following steps: i) a public key corresponding to a PUF-derived private key is exported from the security subsystem; ii) a Trusted Party observes the key export event and signs the public key as part of a device's Identity Certificate; iii) the resulting Identity Certificate is stored in the device. This requires only one trusted party in the supply chain, unlike the legacy model where multiple parties in the supply chain need to be trusted.

After a device has obtained an Identity Certificate as defined above, other parties can use the certificate to securely provision application keys. For example, an Application Provider (AP) provisions an application key to the device in the field with its Provisioning Server. First, the Provisioning Server validates the device certificate with the public key of the Trusted Party. Second, the Application Provider (AP) encrypts the application key with the device’s public key. Finally, the device receives and decrypts the application key with its private key inside the security subsystem. Then, it re-encrypts the application key with a symmetric root key for secure storage. As part of the second step the device may check a certificate of the application owner’s Provisioning Server.
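
The sketch below walks through both phases in schematic form, using RSA from the Python 'cryptography' package in place of the PUF-derived key pair (d, Q), and a bare signature over the device public key in place of a full X.509 Identity Certificate. All names and primitives are illustrative assumptions, not the product's actual implementation.

```python
# Schematic sketch of the two phases described above. RSA stands in for the
# PUF-derived key pair, and a bare signature stands in for the Identity
# Certificate; all names are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# --- One-Time Trust event ---------------------------------------------------
device_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # PUF-derived in reality
device_pub_bytes = device_priv.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

trusted_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Trusted Party key
identity_cert = trusted_priv.sign(device_pub_bytes, padding.PKCS1v15(), hashes.SHA256())

# --- Application key provisioning -------------------------------------------
# 1. The Provisioning Server validates the device identity with the Trusted Party's public key.
trusted_priv.public_key().verify(identity_cert, device_pub_bytes,
                                 padding.PKCS1v15(), hashes.SHA256())  # raises if invalid

# 2. The Application Provider encrypts the application key for this device.
app_key = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = serialization.load_der_public_key(device_pub_bytes).encrypt(app_key, oaep)

# 3. The device unwraps the key inside its security subsystem and would then
#    re-encrypt it under a symmetric root key for secure storage.
assert device_priv.decrypt(wrapped, oaep) == app_key
```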

The scheme above never exposes the device root key and its derived private key. They do not leave the device and are not known by any party in the supply chain. On top of this, neither the OEM nor the AP has to share any of their secrets with another party in the supply chain. In particular, the SM does not have to handle any keys, which not only reduces its liability but also simplifies its logistics. We also emphasize that both the root key provisioning and the application key provisioning can be performed in the field. This removes any delay in the production of the chips as well as the devices, leading to higher yield and lower costs.

 

BYOD, Bring Your Own Device

Smartphones and tablets are rapidly gaining ground in business and government. People increasingly use mobile devices privately and want to use them in the workplace as well.

Allowing employees' own devices to be used for business purposes is what we call BYOD, Bring Your Own Device. We can also speak of BYOD when the company provides the mobile device and allows private use. The technology comes from the consumer market and certainly does not offer the security level required for use within the central government.

How, then, can organizations, especially those in the central government, introduce smartphones and tablets in a responsible way?

Compumatica, in cooperation with the Ministry of Defence and the Police, is researching the secure processing of government information on (company-owned) smartphones and tablets. Our conclusion is that the current generation of devices can be secured acceptably, provided they are used for business applications only. At the moment this still leads to reduced user-friendliness and a lack of understanding among users. However, we see developments in operating systems and security products for smartphones and tablets that will, in time, make better security achievable without sacrificing ease of use and functionality. The solution we want to develop and bring to market in this light is hardware-based, which is necessary to have a chance of being approved by the government for the classification level Departementaal Vertrouwelijk, or even Confidentieel. The Ministry of Security and Justice (GDI), the National Police (innovation team with the VPK project), and the Ministry of Defence (Kixs) have pledged their cooperation in the form of discussing customer-specific requirements and wishes.