... you can claim earlier sessions.
... also FF; otherwise Daniel resuming FF on Jan 11
Embeddings, ML refresher.
-------------------
Andreas Koukounas, Georgios Mastrapas, Michael Günther, Bo Wang, Scott Martens, Isabelle Mohr, Saba Sturua, Mohammad Kalim Akram, Joan Fontanals Martínez, Saahil Ognawala, Susana Guzman, Maximilian Werk, Nan Wang, Han Xiao
Contrastive Language-Image Pretraining (CLIP) is widely used to train models to align images and texts in a common embedding space by mapping them to fixed-size vectors. These models are key to multimodal information retrieval and related tasks. However, CLIP models generally underperform specialized text models on text-only tasks. This creates inefficiencies for information retrieval systems that keep separate embeddings and models for text-only and multimodal tasks. We propose a novel multi-task contrastive training method to address this issue, which we use to train the jina-clip-v1 model to achieve state-of-the-art performance on both text-image and text-text retrieval tasks.
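For the refresher: the standard CLIP objective, which jina-clip-v1's multi-task method builds on, is a symmetric InfoNCE contrastive loss over a batch of N image-text pairs. The generic formulation below is not quoted from the paper; v_i and t_i are the image and text embeddings, sim is cosine similarity, and tau is a temperature:

  \mathcal{L} = -\frac{1}{2N}\sum_{i=1}^{N}\left[\log\frac{\exp(\mathrm{sim}(t_i, v_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(t_i, v_j)/\tau)} + \log\frac{\exp(\mathrm{sim}(v_i, t_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(v_i, t_j)/\tau)}\right]

Each term pushes a matching pair's similarity up relative to the other N-1 in-batch negatives, once per retrieval direction (text-to-image and image-to-text).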
[Daniel] - full stack genAI case study
[Jake] - analytics as BigQuery custom functions
[Warren] - in-line backup for Connect - https://extensions.dev/extensions/firebase/firestore-bigquery-export
[Daniel] - Firestore point-in-time recovery (PITR): https://cloud.google.com/firestore/docs/pitr; this could change everything.
Lee - ESE FAIR: ...
Ara - ARPA-H... FAIR: ... https://deck.gl/
Jonas, Inês - TensorFlow Projector; FAIR: reassemble interactive plot from zip or JSON (see sketch after this list)
Inês - Plotly: https://epiverse.github.io/tcgapath
Jeya - Vector Gazer: https://jeyabbalas.github.io/vector-gazer/ ... FAIR: ...
Daniel - ... https://observablehq.com/d/d58131e5b0a6da40 (Plotly)
Praful - Plotly via jsDelivr ESM: (await import("https://cdn.jsdelivr.net/npm/plotly.js-dist/+esm")).default
Praful - with versioning: (await import("https://cdn.jsdelivr.net/npm/plotly.js-dist@2.30.0/+esm")).default
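On the Praful lines: jsDelivr's /+esm endpoint converts the CommonJS plotly.js-dist bundle into an ES module, so the Plotly object arrives as the default export, and pinning @2.30.0 keeps a notebook reproducible rather than floating to the latest release. A minimal sketch (any ES module context with top-level await, e.g. an Observable cell):

  // Unpinned: resolves to the latest published plotly.js-dist (can break later)
  const Plotly = (await import("https://cdn.jsdelivr.net/npm/plotly.js-dist/+esm")).default;

  // Pinned: same code, reproducible across sessions
  // const Plotly = (await import("https://cdn.jsdelivr.net/npm/plotly.js-dist@2.30.0/+esm")).default;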
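Sketch for the Jonas/Inês FAIR item, assuming the figure was exported as a Plotly JSON spec ({data, layout}); the file name plot.json and the div id "chart" are placeholders, and a zip export would first need unzipping with a client-side zip library:

  // Load Plotly as an ES module (pinned for reproducibility)
  const Plotly = (await import("https://cdn.jsdelivr.net/npm/plotly.js-dist@2.30.0/+esm")).default;

  // Fetch the exported figure spec: {data: [...], layout: {...}}
  const spec = await (await fetch("plot.json")).json();

  // Rebuild the interactive figure inside an existing <div id="chart">
  await Plotly.newPlot("chart", spec.data, spec.layout, { responsive: true });

Because the spec is plain JSON, the same file also serves as the FAIR artifact: anyone can re-render the interactive plot without the original analysis environment.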