Data Marketplaces: Best Practices, Challenges, and Advancements for Embedded Finance

Data leads to knowledge and is considered crucial for running a successful business. Its production is increasing dramatically, creating challenges in identifying which data are worth extracting and which are not. Complementing this growth in data value, data marketplaces are becoming increasingly popular as sources of additional data. They offer capabilities for purchasing and selling external data, simplifying data sourcing and enabling users to navigate today’s complex data landscape. Data marketplaces are continuously evolving to serve the requirements of individuals and businesses in industries such as healthcare and telecommunications, with the leading ones covering individual online activities and financial information.

FAME: Federated Decentralized Trusted Data Marketplace for Embedded Finance

Because data can be used and reused in many ways and for many purposes, its value is increasing dramatically, giving rise to an era characterized by data marketplaces for accessing, selling, sharing, and trading data and data assets. However, most vendors still follow a centralized, monolithic cloud model, controlling most of the market for data services. Moreover, this strategy is incompatible with European objectives for cloud computing and the data economy, as it lacks data sovereignty and cross-cloud interoperability principles.

Comprehensive Architecture for Data Quality Assessment in Industrial IoT

The rapid growth of the Industrial Internet of Things (IIoT) has led to the generation of vast amounts of data, which has significant implications for decision-making in various industrial sectors and is increasingly being traded on data marketplaces. Quantifying IIoT data quality and providing measures to improve it are critical for both operational efficiency and business value.
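The architecture itself is not reproduced here, but the idea of quantifying IIoT data quality can be illustrated with a minimal sketch. The dimension names (completeness, validity, timeliness) and the thresholds below are illustrative assumptions, not the paper's actual metrics:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    timestamp: float          # seconds since epoch
    value: Optional[float]    # None models a missing measurement

def quality_scores(readings, valid_range=(0.0, 100.0), max_gap=5.0):
    """Score a batch of IIoT sensor readings on three common quality
    dimensions (names and thresholds are illustrative assumptions):
    - completeness: fraction of readings with a value present
    - validity: fraction of present values inside valid_range
    - timeliness: fraction of inter-arrival gaps at most max_gap seconds
    """
    n = len(readings)
    present = [r for r in readings if r.value is not None]
    completeness = len(present) / n if n else 0.0
    lo, hi = valid_range
    validity = (sum(lo <= r.value <= hi for r in present) / len(present)
                if present else 0.0)
    gaps = [b.timestamp - a.timestamp
            for a, b in zip(readings, readings[1:])]
    timeliness = (sum(g <= max_gap for g in gaps) / len(gaps)
                  if gaps else 1.0)
    return {"completeness": completeness,
            "validity": validity,
            "timeliness": timeliness}
```

Scores like these could feed a marketplace listing, letting buyers compare datasets on measurable quality rather than vendor claims.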

Towards a Unified Multidimensional Explainability Metric: Evaluating Trustworthiness in AI Models

In this paper, we present a comprehensive framework for assessing the explainability of various XAI methods, such as LIME and SHAP, across multiple datasets and machine learning models, with the ultimate goal of creating a unified multidimensional explainability score. Our methodology focuses on three key aspects of explainability: fidelity, simplicity, and stability. We leverage benchmarking experiments to systematically evaluate these aspects and use the insights gained to construct an offline knowledge base.
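The paper's exact aggregation formula is not given here; as a minimal sketch of combining per-aspect metrics (fidelity, simplicity, stability) into one unified score, the following uses a weighted average over values already normalized to [0, 1]. The weighted-average choice and the default equal weights are assumptions for illustration:

```python
def unified_explainability_score(metrics, weights=None):
    """Combine per-aspect explainability metrics into a single score.

    `metrics` maps aspect name -> value scaled to [0, 1], higher is
    better (e.g. fidelity, simplicity, stability). The weighted-average
    aggregation and equal default weights are illustrative assumptions,
    not the paper's exact formula.
    """
    if weights is None:
        weights = {k: 1.0 for k in metrics}
    total = sum(weights[k] for k in metrics)
    return sum(weights[k] * v for k, v in metrics.items()) / total

# Hypothetical benchmark results for two XAI methods:
lime_like = {"fidelity": 0.9, "simplicity": 0.6, "stability": 0.75}
shap_like = {"fidelity": 0.8, "simplicity": 0.7, "stability": 0.9}
```

A single scalar like this makes methods directly comparable across datasets and models, at the cost of hiding which aspect drove the difference; keeping the per-aspect breakdown alongside the score mitigates that.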