All Ads

Attribution of responsibility in the AI value creation network

Requested Partner: Industry & Public

Short Description: We would like to develop an applied research project addressing the following two research questions: i) What are the ethical, environmental, and social responsibilities of individual actors within the AI value chain?, and ii) How can Swiss companies successfully manage their ethical, environmental, and social responsibilities within the AI value network?

To answer such questions, we set three objectives:

1) Analyzing the AI value creation networks of Swiss companies.

2) Developing a concept that encompasses the requirements for ethical, environmental, and social responsibility within the AI value creation network.

3) Prototyping a consulting service including a maturity model and a training concept that will provide a comprehensive approach to responsibility in the AI value creation network.

Deadline: Please send your submissions with the Application Form by May 15, 2024.

Continuous monitoring of thermal solar panels’ efficiency


The proper operation of thermal solar panels (TSP) is an issue because maintenance, if performed at all (in most cases it is not), usually happens only once a year. Deterioration of the installation is not apparent to the owner, as the TSP is only a supporting system: Domestic Hot Water (DHW) remains available, but the heating energy needed rises because the inflowing (cold) water is no longer pre-heated by the solar system.

There are over 25’000 villas in Geneva and over 1 million in Switzerland (according to the Swiss Federal Statistical Office). There is no specific data on villas equipped with TSP, but the share should be at least 10%. Laws on energy efficiency have already been passed in some cantons, which will further push the installation of such systems.


  1. Define the 5 to 7 key parameters and their weighting to calculate the rated efficiency of the TSP installation (orientation, angle, cleanliness, shadows, age, glycol quality, etc.) so that it can be compared with actual production. (Producer data are far too optimistic, as they are obtained in laboratories.)
  2. Put together the necessary low-cost kit to allow continuous data monitoring able to detect deterioration of the TSP’s efficiency. This kit should not cost more than 500 CHF (without installation).
  3. Define the business model to enter the villa market and define a USP for residential buildings. The revenue model should derive from yearly contract revenue. The automated data management should allow for a low annual fee (almost no variable cost), and business development should come from volume. To reach this volume within a reasonable time, it is important to define an adequate distribution model.
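As an illustration of objective 1, the weighted rating could be sketched as follows. The parameter names, weights, and scores below are hypothetical placeholders, since defining the real parameters and their weighting is exactly what the project is asked to do.

```python
def rated_efficiency(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-parameter scores (0.0-1.0) into one weighted rating."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same parameters")
    return sum(scores[p] * weights[p] for p in scores) / sum(weights.values())

# Hypothetical weights and scores for one installation (NOT validated values):
weights = {"orientation": 0.25, "tilt_angle": 0.20, "cleanliness": 0.15,
           "shading": 0.15, "age": 0.15, "glycol_quality": 0.10}
scores = {"orientation": 0.9, "tilt_angle": 0.85, "cleanliness": 0.7,
          "shading": 0.8, "age": 0.75, "glycol_quality": 0.6}
rating = rated_efficiency(scores, weights)  # a value between 0 and 1
```

The resulting rating would then be compared with the installation’s actual measured production.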

Data Management

Collected data will be directed to an already existing SCADA system (Ignition). This system allows data management, alarm algorithms, and automated reporting.
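One possible shape for such an alarm algorithm is sketched below: compare measured yield against the expected yield derived from the rated efficiency, and raise an alarm when the ratio stays low for several consecutive days. The threshold and window values are illustrative assumptions, not tuned figures.

```python
def deterioration_alarm(measured: list[float], expected: list[float],
                        threshold: float = 0.8, window: int = 3) -> bool:
    """Alarm if measured/expected daily yield stays below `threshold`
    for `window` consecutive days."""
    below = 0
    for m, e in zip(measured, expected):
        below = below + 1 if e > 0 and m / e < threshold else 0
        if below >= window:
            return True
    return False

# A panel drifting to ~60% of expected output triggers the alarm:
print(deterioration_alarm([9.5, 9.0, 5.5, 5.8, 5.6], [10, 10, 10, 10, 10]))  # → True
```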

Technical pre-requisites

  • A sample of 20 installations should be sufficient to get a fair approach to rated efficiency
  • The low-cost kit should be composed of:
    • Gateway (M-Bus or LoRaWAN)
    • Water meter with M-Bus or LoRaWAN interface
    • Interface to the solar control unit

Digital Onboarding for Small and Medium-Sized Companies: A Toolbox Approach

Requester: Berner Fachhochschule (BFH)

In today’s rapidly evolving business landscape, the onboarding process has emerged as a crucial aspect of organizational success. With the advent of remote work and virtual teams, new challenges have arisen, necessitating the development of structured onboarding processes that incorporate digital tools. However, the integration of analog and digital tools to effectively assimilate new hires remains a persistent challenge for organizations. With our work we want to explore the significance of onboarding in SMEs, elucidate the challenges posed by remote work, and propose potential strategies for integrating analog and digital tools in the onboarding process.

Effective onboarding of new hires has long been recognized as a critical factor in organizational success. It ensures a smooth transition for new employees and facilitates their assimilation into the organizational culture and workflow. However, with the paradigm shift towards remote work and virtual teams, the traditional onboarding process has encountered new hurdles, necessitating the adoption of novel strategies and digital tools. New work arrangements offer flexibility and access to talent pools beyond geographical boundaries. However, they also present unique challenges in terms of team collaboration, communication, and fostering a sense of belonging. Addressing these challenges requires a reimagining of the onboarding process.

Leveraging digital technologies for employee onboarding creates complexities that differ from traditional in-person onboarding. The absence of physical presence makes it challenging to establish personal connections (commitment and organizational citizenship behaviour) and convey the organizational culture effectively. Additionally, the reliance on digital communication platforms raises concerns about information overload, misinterpretation, and reduced non-verbal cues. Consequently, organizations must adapt their onboarding processes to ensure remote hires receive adequate support and integration. When adequately orchestrated and integrated in organizational processes, virtual meeting platforms, collaborative software, and knowledge-sharing tools provide avenues for effective communication, training, and engagement. By leveraging these tools, organizations can create interactive and immersive experiences that simulate in-person interactions, fostering stronger connections and facilitating the assimilation of new hires.

However, to bridge the analog-digital divide, organizations must carefully blend both types of tools in their onboarding processes. A hybrid approach can leverage the benefits of face-to-face interactions while harnessing the efficiency and accessibility of digital tools.

The challenges posed by the combination of analog and digital tools necessitate innovative strategies. By implementing a structured onboarding process that incorporates digital tools effectively, organizations can foster a sense of connection, enhance employee engagement, and lay the foundation for long-term success in a digital-first era.

The project aims to develop a toolbox of digital solutions in order to empower SMEs to effectively integrate new hires. Through empirical research and a focus on specific SME needs, this study seeks to enhance the onboarding experience and promote organizational success.

In order to carry out the project in a practical manner, continuous collaboration with potential industry partners is essential. This collaboration allows for the identification of pain points and the gathering of valuable insights from previous experiences. By engaging in this exchange, targeted research can be conducted, leading to the development of practical and feasible solutions.

In a first step, we are actively seeking companies interested in participating in a shaping workshop to share their knowledge and insights regarding the use of digital tools in the onboarding processes. This workshop will provide an opportunity for participants to discuss the challenges they face and generate initial ideas on how to effectively address these challenges. We encourage companies to join us in this collaborative session to foster knowledge sharing and collectively enhance the onboarding experience.

In a second step, we are looking for firms as implementation partners to co-develop initial concepts and possibly prototypes. Firms could be specialized in HR, or HR consulting, or providing systems/development to support HR processes.

Improving behavior in front of screen using AI

Intelec Artificial Intelligence GmbH (Intelec AI)

Idea description
We all spend more and more time in front of screens. Sitting in front of the screen in the correct position and keeping a minimum distance between your eyes and the screen are crucial to staying healthy and avoiding the development of short-sightedness. The problem is that many people, especially children, don’t follow these rules.

We propose developing software that observes our sitting position and distance from the screen using the device’s built-in camera and notifies us when either is wrong. This way, technology can help us build good screen habits and protect our health.

We are looking for a partner who can collaborate with us to build an app for testing the above idea. Ideally, the partner can work on developing an android mobile application (user interface) and Intelec AI can develop the image analysis part of the app.

Main challenges are:

  • running a background app with “always on” camera which consumes as little energy as possible
  • respecting user privacy
  • the app should work for users of different ages, ethnicities, and genders
  • the app should work with a wide variety of cameras and lighting conditions
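As a rough illustration of the distance-monitoring part, the eye-to-screen distance can be estimated from the detected face width using a pinhole-camera approximation. The focal length and face width constants below are assumed calibration values; a real app would calibrate per device and use an on-device face detector.

```python
FOCAL_LENGTH_PX = 600.0   # assumed camera focal length in pixels (per-device value)
FACE_WIDTH_CM = 14.0      # rough average face width; an assumption, not a measured constant

def estimate_distance_cm(face_width_px: float) -> float:
    """Pinhole approximation: distance ≈ focal_length_px * real_width / width_px."""
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return FOCAL_LENGTH_PX * FACE_WIDTH_CM / face_width_px

def too_close(face_width_px: float, min_distance_cm: float = 40.0) -> bool:
    """Notify the user when the estimated distance drops below the minimum."""
    return estimate_distance_cm(face_width_px) < min_distance_cm
```

A larger face in the frame means a shorter distance, so the check reduces to comparing the detected bounding-box width against a calibrated limit.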

Requested Support
We are looking for research partners who have experience with the challenges listed above and are interested in addressing them. This also includes the possibility of offering MSc-level theses on these topics.

TalkDoc: the AI-Powered Documentation (by Cross-ING)

Idea description
In today’s fast-paced business world, efficient documentation is key to maintaining a competitive edge. Our idea is to develop TalkDoc, an AI-powered documentation assistant that streamlines the creation, organization, and management of a company’s internal and external documents. We envision a cutting-edge Large Language Model (LLM) that is fine-tuned on a company’s specific documentation and hosted locally to ensure security and data privacy.

1. Tailor made: fine-tuned to company’s specific documentation needs, ensuring relevant and accurate content generation
2. Secure: hosted locally and offline, unparalleled data security and privacy
3. Versatile: with different access levels, utilized by employees as a documentation assistant and externally as a chatbot for customers
4. Comprehensive: all aspects of documentation covered, from internal and external documents to product guides and user support

Technical details
The system would offer three main categories of documentation assistance:

• Internal Documents: simplified creation and management of internal documents such as operating agreements, non-disclosure agreements, employment contracts, business reports, financial records, minutes for business meetings, business plans, compliance and regulatory documents, and internal product documentation.
• External Documents: simplified external documentation process, which covers business proposals, contracts with vendors and customers, and transactional documents.
• Products: improved product documentation with AI-generated user guides, instruction manuals, troubleshooting guides, SDKs, feature documentation, and FAQs.

TalkDoc’s modular architecture would allow for seamless integration into existing systems and support a wide range of formats, ensuring compatibility with current documentation tools. Employees could leverage the AI assistant to draft, edit, and collaborate on documents, while the external-facing chatbot would provide real-time support to customers and users. With different access levels, TalkDoc would be a versatile tool for employees as well as a customer-facing chatbot. TalkDoc’s AI-driven technology would empower companies to streamline their documentation processes, reducing errors and fostering collaboration, and so minimize the hours spent on drafting, organizing, and managing business documentation.
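A minimal sketch of the retrieval step such an assistant might use, assuming locally stored documents: rank the company’s documents against a query and feed the best matches to the locally hosted LLM. The toy bag-of-words overlap below stands in for a real embedding-based retriever, and the document names are invented.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase, punctuation-stripped word set (a deliberately crude tokenizer)."""
    return {w.strip(".,;:!?").lower() for w in text.split()}

def rank_documents(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the `top_k` documents sharing the most terms with the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda name: len(q & tokenize(docs[name])), reverse=True)
    return ranked[:top_k]

# Invented local documents:
docs = {
    "nda_template": "non-disclosure agreement between the company and a vendor",
    "meeting_minutes": "minutes for the quarterly business meeting",
    "user_guide": "troubleshooting guide for the product installation",
}
print(rank_documents("vendor non-disclosure agreement", docs, top_k=1))  # → ['nda_template']
```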

Requested Support
We, Cross-ING, are looking for a research and development-friendly environment with access to resources and networking opportunities. We seek partners for the entire product development life cycle, from conceptualisation and prototyping to testing and validation. We also aim to expand the team and create strategic partnerships for long-term growth.

Smart Platform for Earth Observation Data Search & Use

Requester: University of Zurich (incl. extended consortium of industry and research)

The availability and quality of satellite data have strongly increased in recent years. However, this diversity of data also makes it difficult to maintain an overview of current developments. This applies to finding the most suitable data for specific questions, data handling (from data access, management and processing to evaluation and visualization towards interpretation), and the critical reflection on the validity and quality of the available derived information. Especially outside of research, extensive expertise is required for this, which is only available to a limited extent depending on the organization.

The idea presented here is to implement a platform that combines the following modules:

(i) an extensive database of (freely) available Earth observation data and products (e.g., from satellites, via Copernicus or Open Data sources);

(ii) a comprehensive characterization of the data in terms of data specifications (format, etc.) and potential data applications (i.e., labelling application areas/product properties, etc.);

(iii) a harmonization of the data with respect to data format/type, geometry, and metadata;

(iv) a flexible, self-optimizing search engine based on current developments in NLP (i.e., considering context/semantics understanding, etc.);

(v) intuitive and standardized interfaces for data access and sharing.
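A sketch of what the harmonization module could look like for two hypothetical catalogue entries: map heterogeneous source records onto one common schema with normalized units. The field names, source conventions, and defaults are illustrative assumptions.

```python
def harmonize(entry: dict) -> dict:
    """Map a raw catalogue entry onto a common schema with normalized units."""
    return {
        "title": entry.get("title") or entry.get("name", "unknown"),
        "format": (entry.get("format") or entry.get("file_type", "")).upper(),
        # normalize ground resolution to meters, whichever unit the source used
        "resolution_m": float(entry.get("resolution_m")
                              or entry.get("gsd_cm", 0) / 100),
        "crs": entry.get("crs", "EPSG:4326"),  # default to WGS84 when unspecified
    }

# Two entries from hypothetical sources with different conventions:
a = harmonize({"title": "Sentinel-2 L2A", "format": "geotiff", "resolution_m": 10})
b = harmonize({"name": "Aerial RGB", "file_type": "jp2", "gsd_cm": 50})
```

The point of the sketch is only the mapping pattern: each connector translates its source’s field names and units once, so every downstream module works against a single record shape.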

The goal is to allow people without prior knowledge of or expertise in satellite missions and sensors to access and use this data. Examples include information on pollution or natural hazards, long-term statistics based on satellite data on the effects of climate change and biodiversity decline, or visualization of land cover and land use in near real time.


  • Development of a database management system for linking a wide variety of data formats including metadata.
  • Semi-automatic processes for thematic labelling of EO data.
  • Development of business model ideas and their implementation.

Whom are we looking for?

  • Organizations/users interested in using/implementing the platform.
  • Organizations contributing to the data base.
  • Organizations/users using the data outcome.

Leveraging the power of open source to boost the usability of product data in the construction industry


The ultimate goal of the OpenMaterialData project is to enable all stakeholders in the construction industry to perform their workflows on the same product specific datasets, regardless of which software tools they use. 

This results in significant benefits that are critical to a greener and more productive construction industry. Firstly, evaluating and comparing a much broader range of materials / products becomes significantly easier through holistic filtering (quality, performance, sustainability, cost), especially for less experienced stakeholders. Secondly, transparency increases. Among other things, the impact of a material on the performance of a building can be much better visualized, innovative products can enter the market more easily and assessing the circularity index of a building becomes much simpler. Thirdly, manual labor and inaccuracies in planning are drastically reduced based on digital, user centric services, which directly addresses the shortage of experts.



The OpenMaterialData project addresses the lack of usability of product data for software tools in the construction industry – an issue that is central to both the acceleration of digitization and the sustainability of the sector. As of today, every stakeholder – from architects, engineers, and sustainability experts to buyers – compiles product-related data from individual sources, which not only creates a lot of manual labor but also inaccuracies and opacity.

Although demand from data consumers is high, existing initiatives are currently not capable of delivering the required holistic data sets. In general, these initiatives face two main problems. Firstly, proprietary databases usually charge manufacturers for onboarding their data and exposing it through their channels to potential customers. This approach has its limitations, as it is not possible for every initiative to reach an agreement with all manufacturers. Secondly, the data delivered by the manufacturers lacks quality. Since no standard for documenting product data has been broadly established, the data delivered by the manufacturers is often internally inconsistent and not comparable across sources. Mapping and validating therefore become quite an effort, which makes the business model of being a proprietary “data collector” difficult. As a result, most of the data that is reliably available through an API comes from public EPD databases. These data sets do not contain any further information about the mechanical and physical properties of a product – a critical shortcoming.


Technically, the biggest challenge is to make unstructured data queryable. While this is a known problem in many other industries, the specific conditions in the construction industry are different. Firstly, the number of products (per country) to begin with can be relatively small. By focusing first on the materials/products that are most important to a building’s energy performance and environmental footprint, and excluding others such as fixtures, building services, and appliances, the number of products that must be considered to add immediate value is manageable. Secondly, the need for improvement is great. There is pressure both on the part of building owners, who need a better understanding of a building’s emissions to achieve their sustainability goals, and on the part of designers and contractors, who want to reduce the enormous amount of work involved in finding the most suitable materials. 



By leveraging the power of the open source movement, the OpenMaterialData project offers a fundamentally new approach. Provided with access to tools for crawling data and for maintaining and enriching datasets (think of a Google-like search engine), an API to search and find data, and a platform to communicate about products as well as data quality (think Reddit), committed members of the community – manufacturers, owners, planners, and contractors – can work together to make product data available in a digital, need-driven way.

The intended workflow is as follows: 

In the first step, data is pulled / indexed from different sources and different formats. 

In the second step, the resulting metadata needs to be validated, linked, and/or enriched.

For the third step, an API must be provided that gives different tools access to the indexed data. 
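The three workflow steps above can be sketched as a minimal pipeline. The record fields, the enrichment rule, and the filter API are placeholders for illustration, not a proposed schema.

```python
def pull(sources: list[list[dict]]) -> list[dict]:
    """Step 1: index raw product records from several sources."""
    return [record for source in sources for record in source]

def validate_and_enrich(records: list[dict]) -> list[dict]:
    """Step 2: keep records that can be linked, and attach a derived field."""
    out = []
    for r in records:
        if "product_id" not in r:
            continue  # reject records that cannot be linked
        out.append(dict(r, has_epd="epd_url" in r))  # example enrichment
    return out

def query(index: list[dict], **filters) -> list[dict]:
    """Step 3: the API layer, here a trivial exact-match filter."""
    return [r for r in index
            if all(r.get(k) == v for k, v in filters.items())]

index = validate_and_enrich(pull([
    [{"product_id": "ins-01", "category": "insulation", "epd_url": "…"}],
    [{"product_id": "ins-02", "category": "insulation"}, {"category": "window"}],
]))
print(query(index, category="insulation", has_epd=True))
```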

Business Case

The organization that sets up the technical infrastructure, maintains it, and offers support to the community must be non-profit and independent. It can be very lean, mostly driven by volunteers, and will be funded by government grants, voluntary membership fees, and sponsors from the industry who a) need access to the data and b) want to support the transformation towards a more digital and sustainable industry.

All activities focus on the goal of indexing as much data as possible. While the initial setup requires one-time investments, the effort will be significantly reduced once the initiative triggers a pull effect comparable to SEO: manufacturers will increasingly adopt common standards, which in turn reduces the mapping effort.

Once up and running, the non-profit organization can create additional earnings, e.g. by offering fee-based services that help manufacturers onboard their data, or by running ads on the communication platform to eliminate the need for external sponsoring.

In macroeconomic terms, the OpenMaterialData project will be the basis for a whole range of innovative services that will fundamentally change the way information about materials and products in construction projects is collected and shared digitally. The project is therefore not a competitor to existing initiatives but an enabler that can lift them to a new level.

Requested support

While we have broad support from potential data consumers (owners, planners, software developers), we still need to work out the technical framework.

We are looking for experts with experience in

  • search on unstructured, distributed data (small data),
  • diverse data formats and storages,
  • data enrichment,
  • and databases


Smart Mobility for Tourism

Mobility as a Service (MaaS) is currently being developed only for cities (e.g. Zurich, Basel). However, rural tourist regions also struggle with challenges in the area of mobility and the associated environmental impact.

The open question is how, for a MaaS in rural tourist regions, the necessary data can be collected, forwarded to a central processing facility, evaluated, and prepared for users. This covers not only data on motorized private transport (traffic volume, traffic management) but also public transport (bus & rail), private railway operators (cable cars), and non-motorized transport (bicycles, kick scooters). Not only is the data collection a challenge, but so is the anonymization of the data.
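One common approach to the anonymization challenge is aggregation with small-count suppression: publish only trip counts per zone and hour, and suppress any cell below a minimum count k so individual journeys cannot be singled out. The zone names and the value of k below are illustrative.

```python
from collections import Counter

def aggregate_counts(trips: list[tuple[str, int]], k: int = 5) -> dict:
    """Count trips per (zone, hour) cell and suppress cells with fewer than k trips."""
    counts = Counter(trips)
    return {cell: n for cell, n in counts.items() if n >= k}

# Illustrative trips as (zone, hour-of-day) pairs:
trips = [("gondola_base", 9)] * 12 + [("lake_trail", 9)] * 3
print(aggregate_counts(trips, k=5))  # the 3-trip cell is suppressed
```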

The goal would be to develop a platform, or to integrate a new service into an existing platform, in order to bundle access to mobile service offerings (on-demand transport, sharing, etc.) and thus gradually reduce and replace motorized individual transport. For tourism organizations, this would offer an additional opportunity, in line with the sustainability strategy of Switzerland Tourism (ST) (Swisstainable – STnet), to guarantee a car-free holiday and to make the respective destinations accessible simply, environmentally friendly, and comfortably by public transport alone.


  • Addressing the technical challenges of real-time data collection, both from the sensor perspective and with regard to data management.
  • Understanding the challenges and possible solutions regarding data-ethics aspects (data protection, privacy), in particular building expertise in data anonymization and/or the use of synthetic data.
  • Development of business model ideas and their implementation.


  • Companies with experience in mobility data and its collection and management.
  • Companies with experience in platforms for the preparation, evaluation, and visualization of such data.
  • Organizations with experience in tourism and adapted mobility concepts, or aiming to integrate such a service.
  • Research institutions that conduct research in data science for real-time applications and are looking for an applied use case, OR that have experience with the ethical-moral questions and legal frameworks involved.

ML model and dataset licensing

In many applications of machine learning (ML), foundation models are becoming the starting point for downstream analyses and tasks. For example, any text-bound task will often begin with a large language model, which has been pre-trained on vast text corpora scraped from the web. Subsequently, these models are tuned on a few annotated samples for the specific task at hand. This unsupervised pre-training ensures that the model has learned good representations, such that the concrete task can be solved with only a few annotated samples.

Recently, we have observed a trend of big tech companies limiting the licensing of foundation models to non-commercial uses. While this gatekeeping is understandable from a business perspective, it poses a significant challenge for innovative small to mid-sized companies, given that they rely heavily on the results of state-of-the-art research, which is dominated by these very companies.

At the same time, it is unclear how licensing of public datasets impacts the usability of models that used these datasets, e.g., is a model that was trained on restrictive data still subject to the same license restrictions, even if it was trained on other (more permissive) data? Given these observations, we foresee a need to clarify the impact of ML model and dataset license restrictions and the extent to which companies within the data innovation alliance are facing similar challenges and how they are addressing them.


    1. To determine the extent to which small to mid-sized companies are encountering the
      challenge of restrictive licenses on ML models and datasets.
    2. To understand the implications of combining various license texts in a given ML project, e.g., training an MIT-licensed model on data under a non-commercial Creative Commons license (such as CC BY-NC-SA 4.0).
    3. To identify amendments to and/or modifications of original licensing texts after, e.g.,
      finetuning pre-trained models on internal data, making architectural changes, etc.
    4. To explore the possibility of forming a collaboration to curate data and/or pre-train models
      on public datasets, if there is sufficient interest among the participants.
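Even a toy version of the compatibility question in objective 2 shows the shape of the problem. The rule and license set below are deliberately simplified illustrations and NOT a legal determination; the project's premise is precisely that the real answer is unclear.

```python
# A deliberately naive illustration: treat a project as commercially usable only
# if neither the model license nor any training-data license is non-commercial.
# Real license interactions (copyleft, attribution, model-vs-output questions)
# are exactly what the project wants to clarify.

NON_COMMERCIAL = {"CC BY-NC-SA 4.0", "CC BY-NC 4.0"}

def commercial_use_ok(model_license: str, dataset_licenses: list[str]) -> bool:
    if model_license in NON_COMMERCIAL:
        return False
    return not any(lic in NON_COMMERCIAL for lic in dataset_licenses)

# An MIT-licensed model fine-tuned on CC BY-NC-SA 4.0 data would be flagged:
print(commercial_use_ok("MIT", ["CC BY-NC-SA 4.0"]))  # → False
```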

The ML model and data landscape is becoming less and less open, posing grave challenges to small and mid-sized companies with limited computational resources. We consider this a timely question to address and hope that, by leveraging the data booster program, a future bottleneck due to restrictive licensing can be avoided.

Requested support
Companies might find themselves in a similar situation, which could pave the way to a collaborative effort in the future.
Research partners could provide the necessary legal expertise.

Call for Participation description (pdf)

Deadline: March 03, 2023

Unleashing the full potential of precision medicine – equitable access to cutting edge data for research at every scale

Requester: Psephisma Health

Even though medicine has always been personal in a sense, it hasn’t been very personalized. The steady development of diagnostic and research technologies has made it abundantly clear that individuals differ in many aspects. It is the appreciation of molecular differences (genetic, transcriptomic, and proteomic ones) that gave rise to the term “personalized” or “precision” medicine as something worth striving for. Its promise – “The right drug for the right patient at the right time” – remains elusive, the main reason being our lack of understanding of what some of the molecular features really mean and which ones to observe when prescribing treatments. The key to making further progress is to study greater volumes of such molecular profiling data in the context of health and disease. That is, however, easier said than done, as such data is very scarce due to the costs and efforts associated with its generation. Also, legal regulations in the domain of patient privacy and research on humans make the extension of existing data attributes challenging, if not impossible, which negatively impacts data reuse. With that, simply scaling up current efforts and ways of working is inefficient and unable to produce the desired outcome. We need qualitatively different approaches.

To address existing shortcomings and create a robust precision medicine framework that scales easily in the future, we have conceived an innovative health data management platform – Psephisma Health. The platform holds molecular profiling data generated in the course of clinical research directly associated with its owner – the patient. By having patients directly involved, molecular data attributes can easily be extended and the data itself augmented by matching data from other sources, such as regular diagnostics. For customers – external entities which require molecular real-world data for research (biotechnology and pharmaceutical companies, and others) – our platform functions as a virtual cohort builder and data marketplace. It allows simple access and provides powerful search tools that let users narrow down on the data of interest, which can then be leased for research according to the available consent. The lease proceeds are automatically distributed between the patient, the care-providing institution, and the entity that funded the initial data generation. With the above, a whole new model of precision medicine takes shape, one which offers patients numerous additional opportunities to take part in and profit from clinical research.
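The automatic distribution of lease proceeds could be sketched as a proportional split. The share values below are hypothetical; defining fair shares is one of the ethics questions this call raises.

```python
def split_proceeds(amount: float, shares: dict[str, float]) -> dict[str, float]:
    """Distribute a lease payment proportionally to the configured shares."""
    total = sum(shares.values())
    return {party: round(amount * s / total, 2) for party, s in shares.items()}

# Hypothetical 40/30/30 split of a 1000 CHF lease payment:
shares = {"patient": 0.4, "care_provider": 0.3, "funder": 0.3}
result = split_proceeds(1000.0, shares)
```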

Requested support
1. Ethics
Our proposal assumes a redistribution of existing roles between patients and physicians, parts of which require ethics-focused attention.
In general, there appear to be three main areas in the PH business model which need such attention:
a. New roles for patients/individuals, as they are now able to manage secondary research uses of their protected health information (PHI) directly.
b. The implications of the transactions associated with the data lease (lease proceeds for patients, healthcare institutions, and entities that funded the data generation). What is the most ethically appropriate way to distribute the monetary proceeds associated with the data lease? What is a meaningful way to reimburse the funders of data generation when those are public institutions?
c. How to best address possible caveats stemming from the ability of patients to “learn” from their data and to interact indirectly with research entities without involvement of their healthcare institution?

2. Business and financial projections
Assistance in formulating the metrics expected by potential investors: exit strategies, margins, returns.

3. Investor outreach
Assistance in setting the scope and depth of investor materials best suited for recommended/prospective funding sources.