mastodon.tetaneutral.net is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance, a free and decentralized micro-blogging social network, hosted by the association Tetaneutral.net.

Server statistics:

150 active accounts

#computervision

1 post · 1 participant · 0 posts today

Both of my talks from @fosdem last weekend are now available online.

"Return To Go Without Wires" about using Go/TinyGo to make your own AirTags without any Apple hardware:

cuddly.tube/w/p/2H3BJMkJZEJRUS

"Seeing Eye to Eye: Computer Vision using wasmVision" in the first ever WebAssembly dev room at FOSDEM:

video.fosdem.org/2025/k4601/fo

Talk: “Making Victorian News Images Searchable: A Computational Approach to the Illustrated London News, 1842-1899” by Thomas Smits (University of Amsterdam).

Wednesday, 15 January, 1pm GMT, Zoom

Smits discusses his work on creating a searchable dataset of 72,081 illustrations using multimodal embeddings and AI, exploring challenges and opportunities in computational humanities and media history.

Register: forms.office.com/e/Lii2Rd8Ep8
#DigitalHumanities #ComputerVision #MediaHistory

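Not from the talk itself, but a minimal sketch of the general multimodal-embedding idea mentioned above: embed images with a CLIP-style model and rank them against a free-text query. The model name, folder path, and query string below are placeholders, not details from the Illustrated London News project.

```python
# Sketch of multimodal-embedding image search (illustrative only; not the
# pipeline from the talk). Assumes sentence-transformers and Pillow are installed.
from pathlib import Path
from PIL import Image
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")  # CLIP model with a shared text/image space

# Embed a folder of scanned illustrations (placeholder path)
paths = sorted(Path("illustrations/").glob("*.jpg"))
img_emb = model.encode([Image.open(p) for p in paths],
                       convert_to_numpy=True, normalize_embeddings=True)

# Embed a text query into the same space and rank images by cosine similarity
query_emb = model.encode(["a steam locomotive crossing a bridge"],
                         convert_to_numpy=True, normalize_embeddings=True)
scores = img_emb @ query_emb.T  # cosine similarity, since embeddings are normalized
for idx in np.argsort(-scores[:, 0])[:5]:
    print(paths[idx], float(scores[idx, 0]))
```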

#AI #MachineLearning #BiasInAI #STEMSaturday #DeepLearning #ComputerVision #Robotics #ReinforcementLearning

Meet the editors of "Mitigating Bias in Machine Learning": Dr. Carlotta Berry and Dr. Brandeis Hill Marshall (Brandeis Marshall, PhD).
This practical guide shows, step by step, how to use machine learning to make actionable decisions that do not discriminate on the basis of human factors such as ethnicity and gender.
On sale on Amazon: a.co/d/dtMizVH

The latest issue of The Art Bulletin (Vol. 106, Issue 2, 2024) features critical essays on topics such as digital art history, computer vision, and AI in archives. Highlights include “Art History after Computer Vision” by Elizabeth Mansfield and “Digital Art History as Critical AI” by Leonardo Impett. Don’t miss this special focus on tech’s impact on art history and more.

#DigitalArtHistory #DigitalHumanities #ArtificialIntelligence #ComputerVision #ArtResearch

tandfonline.com/toc/rcab20/106

An eye is an eye is an eye is an eye

vimeo.com/1028841287

The installation An eye is an eye is an eye is an eye hijacks the images generated by computer vision ‘observing’ machines to turn them into the medium of a visual and poetic narrative, written in real time, questioning our ability to make sense of the visible, our perception of reality and our relationship to the imaginary.

Something that's a pleasure to see after 10 hours of computation on my laptop for the overnight training of a #ComputerVision model: a confusion matrix that looks like this.

In short, almost no confusion between the 262 classes!

So what does this thing actually classify?

The road signs detected by @panoramax :)

More than 70,000 photos were semi-manually annotated for the training.

All of it is shared and open, of course: huggingface.co/Panoramax
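For illustration only, not the Panoramax training pipeline: a minimal sketch of computing and plotting a multi-class confusion matrix with scikit-learn, using simulated predictions. A near-diagonal matrix is the "almost no confusion" pattern described in the post above.

```python
# Minimal sketch: compute and plot a confusion matrix for a multi-class
# classifier (placeholder data; not the actual Panoramax model or labels).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

rng = np.random.default_rng(0)
n_classes = 262                      # number of road-sign classes in the post
y_true = rng.integers(0, n_classes, size=10_000)
# Simulate a near-perfect classifier: 98% correct, 2% random errors
y_pred = np.where(rng.random(10_000) < 0.98,
                  y_true,
                  rng.integers(0, n_classes, size=10_000))

cm = confusion_matrix(y_true, y_pred, labels=np.arange(n_classes))
# A near-diagonal matrix means almost no confusion between classes
ConfusionMatrixDisplay(cm).plot(include_values=False, cmap="viridis")
plt.title("Confusion matrix, 262 classes")
plt.show()
```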

Small advertisement for my Ph.D. thesis and code, focused on #computervision for #robotics.
Using #julialang to implement #Bayesian inference algorithms for the 6D pose estimation of known objects in depth images.
TL;DR: it works even with occlusions, needs <1 s on a GPU, and does not require training; future research could focus on including color images / semantic information, since state-of-the-art methods perform much better when color images are available.
doc: publications.rwth-aachen.de/re
code: github.com/rwth-irt/BayesianPo

New research introduces "Backward Search" for Conditional Image Retrieval without needing expensive datasets! Achieves mAP@10 of 0.541 on WikiArt, aPY, and CUB datasets—outperforming existing methods. Student model runs up to 160x faster. 🚀 #ComputerVision #AI #ImageRetrieval
journals.plos.org/plosone/arti

journals.plos.org · Backward induction-based deep image search

Conditional image retrieval (CIR), which involves retrieving images by a query image along with user-specified conditions, is essential in computer vision research for efficient image search and automated image analysis. The existing approaches, such as composed image retrieval (CoIR) methods, have been actively studied. However, these methods face challenges as they require either a triplet dataset or richly annotated image-text pairs, which are expensive to obtain. In this work, we demonstrate that CIR at the image-level concept can be achieved using an inverse mapping approach that explores the model’s inductive knowledge. Our proposed CIR method, called Backward Search, updates the query embedding to conform to the condition. Specifically, the embedding of the query image is updated by predicting the probability of the label and minimizing the difference from the condition label. This enables CIR with image-level concepts while preserving the context of the query. In this paper, we introduce the Backward Search method that enables single and multi-conditional image retrieval. Moreover, we efficiently reduce the computation time by distilling the knowledge. We conduct experiments using the WikiArt, aPY, and CUB benchmark datasets. The proposed method achieves an average mAP@10 of 0.541 on the datasets, demonstrating a marked improvement compared to the CoIR methods in our comparative experiments. Furthermore, by employing knowledge distillation with the Backward Search model as the teacher, the student model achieves a significant reduction in computation time, up to 160 times faster with only a slight decrease in performance. The implementation of our method is available at the following URL: https://github.com/dhlee-work/BackwardSearch.
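A hedged sketch of the core idea as the abstract describes it, not the authors' implementation (that is at the linked GitHub repository): nudge the query embedding by gradient descent until a frozen label classifier assigns it the condition label, then retrieve nearest neighbours in embedding space. The classifier head, step size, and iteration count here are illustrative assumptions.

```python
# Sketch of condition-guided query-embedding update in the spirit of the
# abstract above (illustrative only; parameters and classifier are assumed).
import torch
import torch.nn.functional as F

def backward_search(query_emb, condition_label, classifier, gallery_embs,
                    steps=50, lr=0.1, top_k=10):
    """Update the query embedding so the classifier predicts the condition
    label, then retrieve the nearest gallery embeddings."""
    z = query_emb.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([condition_label])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = classifier(z.unsqueeze(0))      # classifier is not updated
        loss = F.cross_entropy(logits, target)   # pull z toward the condition label
        loss.backward()
        optimizer.step()
    # Cosine-similarity retrieval with the updated embedding
    sims = F.cosine_similarity(gallery_embs, z.detach().unsqueeze(0))
    return sims.topk(top_k).indices

# Toy usage with random embeddings and a random linear classifier head
dim, n_classes, n_gallery = 512, 100, 1000
classifier = torch.nn.Linear(dim, n_classes).requires_grad_(False)
gallery = torch.randn(n_gallery, dim)
query = torch.randn(dim)
print(backward_search(query, condition_label=3, classifier=classifier,
                      gallery_embs=gallery))
```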