Machine Learning

76
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/moschles on 2025-06-17 12:10:04+00:00.


Causal Machine Learning

Do you work in CausalML? Have you heard of it? Do you have an opinion about it? Anything else you would like to share about CausalML?

The 140-page survey paper on CausalML.

One of the breakout books on causal inference.

77
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Worried-Variety3397 on 2025-06-17 01:31:39+00:00.


Honestly, the prices I have seen from data labeling vendors are just insane. The delivery timelines are way too long as well. We had a recent project with some medical data that needed pre-sales labeling. The vendor wanted us to pay them every week, but every delivery was a mess and needed countless rounds of revisions.

Later we found out the labeling company had outsourced the whole task to a group of people who clearly had no idea what they were doing. If your project is small, niche, or long-tail, the bigger vendors do not even want to take it. The smaller teams? I just cannot trust their quality.

Besides being crazy expensive, the labeling is always super subjective, especially for big, complex, or domain-specific datasets. Consistency is basically nonexistent. The turnover at these labeling companies is wild too. It feels like half their team just gets a crash course and then is thrown onto your project. I really cannot convince myself they are going to deliver anything good.

Now I am getting emails from companies claiming their "automated labeling" is faster and better than anything humans can do. I honestly have no clue if that is for real since I have never actually tried it.

Is anyone else seeing this problem? How do you all deal with the labeling part of the workflow? Is automated labeling actually any good? Has anyone tried it or had it totally flop?

Would appreciate any honest feedback. Thanks for your time.

78
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/bawkbawkbot on 2025-06-16 12:42:32+00:00.


Hi, I'm bawkbawkbot! I'm a five-year-old chicken recognition bot 🐔 built using TensorFlow. I am open source and can be found here: https://gitlab.com/Lazilox/bawkbawkbot. I've been serving the Reddit community by identifying their chicken breeds. I'm not an expert (I am only a chicken-bot), but the community seems happy with my performance and I often contribute to threads meaningfully!

I run on a Pi 4 and don't need a GPU. People ask why I don't use LLMs or diffusion models, but for small, focused tasks like "which chicken is this?" the old-school CV approach works.
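
A minimal sketch of the kind of CPU-only inference loop this takes, with a hypothetical model path and breed labels standing in for my real ones:

```python
# CPU-only image classification sketch with TensorFlow/Keras.
# "chicken_classifier.keras" and BREEDS are hypothetical placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("chicken_classifier.keras")
BREEDS = ["silkie", "orpington", "leghorn"]  # placeholder label set

def classify(image_path: str) -> str:
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0  # batch of 1, scaled
    probs = model.predict(x, verbose=0)[0]
    return BREEDS[int(np.argmax(probs))]

print(classify("hen.jpg"))  # e.g. "silkie"
```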

Curious what people think — does this kind of task still make sense as a standalone model, or is there value in using multimodal LLMs even at this scale? How long before I'm obsolete?

Bawk bawk!

79
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Rajivrocks on 2025-06-16 14:44:11+00:00.


I believe this has been asked on multiple occasions, but I have a concrete example to ask about. I am writing my Master's thesis at the moment, and I've been putting off making figures because I don't know which web app works best. Here is the figure whose style I'd like to "copy":

https://preview.redd.it/lqwl88m5wa7f1.png?width=1445&format=png&auto=webp&s=8287eeda6dd8151ccb177509c4d46f9cc1a0cf96

From Chen et al. 2021, "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation".

What I specifically like are the 3D representations of the downsampling and upsampling layers in the CNN encoder and the decoder, respectively.

What tools do you guys recommend that can create figures that look as visually appealing and informative as this one?

In my Bachelor's I used Lucidchart because we had a license, but I don't have it anymore, so I've moved to draw.io. I feel that I can't create figures like this one using that website.

What do you guys recommend and what do you guys use for your papers?

80
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Deep_Expression182 on 2025-06-16 07:48:07+00:00.


We’re hiring senior and principal research scientists to shape the future of generative AI at NVIDIA.

We're looking for builders with deep experience in LLMs and/or multimodal models. You’ll work on training and deploying frontier-scale models, designing next-gen model architectures, optimizing training stacks, and helping us push the frontier of AI performance.

We’re a tight-knit team with high standards, strong research instincts, and a bias for shipping.

Open roles:

What we value:

  • Deep understanding of transformer architectures, distributed training and optimization
  • Using the scientific method for conducting methodical training experiments
  • Data curation for pre-training and post-training
  • Experience working with LLMs and/or large multimodal models
  • A builder mindset — clean code, fast iterations, deep thinking

This is a rare opportunity to help shape NVIDIA’s genAI stack from the ground up. We work closely with software, optimization, deployment, and many other research teams, and have massive scale and resources behind us.

Feel free to apply directly through the links.

81
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/avd4292 on 2025-06-16 03:56:54+00:00.


Hi, we have released a new paper that studies the underlying mechanism of the artifacts in attention and feature maps identified in "Vision Transformers Need Registers", a phenomenon that has also been observed in LLMs (e.g., 1, 2). We propose a training-free method to mitigate it. As one of the authors, I am creating this post to kickstart discussion.

Paper: https://arxiv.org/abs/2506.08010

Project Page: https://avdravid.github.io/test-time-registers/

Code: https://github.com/nickjiang2378/test-time-registers/tree/main

82
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Fantastic-Nerve-4056 on 2025-06-16 00:28:24+00:00.


Thought of posting this to get an expert point of view (mainly Research Scientists or Profs.)

So I am a current PhD student in machine learning, working on theoretical aspects of reinforcement learning. I have also interned at Google DeepMind and Adobe Research, working on applied aspects of AI, and here is what I have observed:

Academia: We don't have access to much compute (compared to industry), and since my work is theoretical, we prove things mathematically and then run the experiments already knowing the likely outcome. While this is a lengthy process, it does give off that "research vibe".

Industry: Since there is plenty of compute, the work goes like this: you get an idea, you form a few intuitive expectations, and if it works, great; otherwise you analyse the results, see what could have gone wrong, and come up with a better approach. While I understand things are very applied here, I don't get that "research vibe"; it feels more like a product-dev role.

I am aware that even at these orgs there are teams working on foundational aspects, but they seem to be rare.

So I genuinely wanted to get a sense from relevant experts, both in industry and academia, of what I am missing. I would appreciate any input, as I have always planned on joining industry after my PhD, but that vibe seems to be missing.

83
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/hedgehog0 on 2025-06-15 18:15:15+00:00.

84
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/jsonathan on 2025-06-15 12:19:47+00:00.

85
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Striking-Warning9533 on 2025-06-14 16:31:29+00:00.


I have noticed that a lot of people on Reddit learn only pseudoscience about AI from social media and then tell others how AI works in all sorts of imaginary ways. They borrow words from fiction or myth to explain these systems, look down on actual AI researchers who don't worship their beliefs, and keep using big words that aren't correct, or even used in the ML/AI community, just because they sound cool.

And when you point this out, they immediately get furious and say you are closed-minded.

Has anyone else noticed this trend? Where do you think this misinformation mainly comes from, and is there any effective way to push back against it?

86
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Educational_Pea_5027 on 2025-06-14 14:00:55+00:00.


Hey r/MachineLearning,

I wanted to share a project I've been working on called HandFonted. It's a full-stack Python application that converts an image of handwriting into an installable font file (.ttf).

I'll post direct links to the live demo and the GitHub repo in my first comment below.

The Machine Learning Pipeline

The core of the project is a three-stage process. The ML model is central, but its success depends heavily on the pre-processing and post-processing steps.

  • 1. Input & Segmentation:
    • A user uploads a single image containing handwritten characters.
    • The image is processed with OpenCV: converted to grayscale, adaptive thresholding is applied, and contours are detected to isolate each character into its own bounding box.
  • 2. Classification & Assignment:
    • Each isolated character image is fed into a pre-trained PyTorch (ResNet-Inception) model.
    • The model outputs a probability matrix for all characters against all possible classes (A-Z, a-z).
    • The Hungarian algorithm (linear_sum_assignment) is used to find the optimal one-to-one assignment, ensuring each character image is mapped to a unique letter (see the sketch after this list).
  • 3. Vectorization & Font Generation:
    • The now-classified character images are converted from raster (pixels) to vector outlines using scikit-image.
    • The fontTools library assembles these vector glyphs into a standard .ttf file, mapping each one to its correct Unicode character.
  • Limitations: The system currently works best when the input image has clearly separated characters on a plain white background.
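
To make stages 1 and 2 concrete, here is a rough sketch of the segmentation and assignment steps. The threshold parameters and the probability-matrix shape are illustrative assumptions, not the exact HandFonted implementation:

```python
# Sketch of stages 1-2: character segmentation (OpenCV) and one-to-one
# letter assignment (Hungarian algorithm). Parameter values are illustrative.
import cv2
from scipy.optimize import linear_sum_assignment

def segment_characters(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    return [binary[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def assign_letters(prob_matrix):
    """prob_matrix: (num_crops, 52) softmax scores, one row per character crop.
    Negating the scores turns SciPy's cost minimization into probability
    maximization, which enforces the unique crop-to-letter mapping."""
    rows, cols = linear_sum_assignment(-prob_matrix)
    return dict(zip(rows, cols))  # crop index -> class index (0..51 = A-Z, a-z)
```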

This project was a fantastic learning experience in building a practical, end-to-end ML system. The code is fully open-source, and I'd love any feedback or questions you have about the implementation.

87
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/som_samantray on 2025-06-14 02:58:46+00:00.


How to read ML Papers to stay aware of the most recent developments in the AI industry?

I am an average engineering grad working as a PM, and I like to explore concepts in depth. Research papers are a good source of information, unlike news and clickbait.

I am not expert enough to delve into the mathematical analysis in these papers, but I want to find ways to get the general gist of a paper for my own knowledge.

88
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Chocological45 on 2025-06-13 13:13:39+00:00.


TL;DR: The paper introduces MOSAIC, a framework for collaborative learning among autonomous, agentic AI systems that operate in decentralized, dynamic environments. These agents selectively share and reuse modular knowledge (in the form of neural network masks) without requiring synchronization or centralized control.

Key innovations include:

  • Task similarity via Wasserstein embeddings and cosine similarity to guide knowledge retrieval (see the sketch after this list).
  • Performance-based heuristics to decide what, when, and from whom to learn.
  • Modular composition of knowledge to build better policies.
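
As a rough illustration of the selection step (not the authors' code), ranking peer agents by cosine similarity between task embeddings might look like this, where the embedding vectors are assumed to come from the Wasserstein embedding stage:

```python
# Illustrative sketch: pick which peers to request knowledge from by ranking
# cosine similarity between task embeddings (in MOSAIC, Wasserstein embeddings).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_peers(my_embedding, peer_embeddings, top_k=3):
    """peer_embeddings: dict of agent id -> task-embedding vector."""
    scores = {aid: cosine(my_embedding, e) for aid, e in peer_embeddings.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```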

Experiments show that MOSAIC outperforms isolated learners in speed and performance, sometimes solving tasks that isolated agents cannot. Over time, a form of emergent self-organization occurs between agents, resulting from the discovered hierarchies in the curriculum, where simpler tasks support harder ones, enhancing the collective’s efficiency and adaptability.

Overall, MOSAIC demonstrates that selective, autonomous collaboration can produce a collective intelligence that exceeds the sum of its parts.

The paper: https://arxiv.org/abs/2506.05577

The code: https://github.com/DMIU-ShELL/MOSAIC

Abstract:

Agentic AI has gained significant interest as a research paradigm focused on autonomy, self-directed learning, and long-term reliability of decision making. Real-world agentic systems operate in decentralized settings on a large set of tasks or data distributions with constraints such as limited bandwidth, asynchronous execution, and the absence of a centralized model or even common objectives. We posit that exploiting previously learned skills, task similarities, and communication capabilities in a collective of agentic AI are challenging but essential elements to enabling scalability, open-endedness, and beneficial collaborative learning dynamics. In this paper, we introduce Modular Sharing and Composition in Collective Learning (MOSAIC), an agentic algorithm that allows multiple agents to independently solve different tasks while also identifying, sharing, and reusing useful machine-learned knowledge, without coordination, synchronization, or centralized control. MOSAIC combines three mechanisms: (1) modular policy composition via neural network masks, (2) cosine similarity estimation using Wasserstein embeddings for knowledge selection, and (3) asynchronous communication and policy integration. Results on a set of RL benchmarks show that MOSAIC has a greater sample efficiency than isolated learners, i.e., it learns significantly faster, and in some cases, finds solutions to tasks that cannot be solved by isolated learners. The collaborative learning and sharing dynamics are also observed to result in the emergence of ideal curricula of tasks, from easy to hard. These findings support the case for collaborative learning in agentic systems to achieve better and continuously evolving performance both at the individual and collective levels.

Figure: High-level illustration of the main MOSAIC algorithmic steps. (A) A Wasserstein task embedding is maintained throughout learning. (B) Embeddings are shared with other agents as queries. (C) Agents respond with information regarding their knowledge. Selection occurs via similarity (D) and performance (E). (F, G) Network masks are requested. (H) Received masks are composed together for the next forward pass.

Figure: Comparison of MOSAIC against baseline approaches over 70 runs (14 tasks, five seeds per task) with 95% confidence intervals.

Figure: Ablation of MOSAIC with individual components removed. MOSAIC performs best when all components work together.

89
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/SouvikMandal on 2025-06-12 15:41:51+00:00.


We're excited to share Nanonets-OCR-s, a powerful and lightweight (3B) VLM that converts documents into clean, structured Markdown. The model is trained to understand document structure and content context (tables, equations, images, plots, watermarks, checkboxes, etc.).

🔍 Key Features:

  • LaTeX Equation Recognition: converts inline and block-level math into properly formatted LaTeX, distinguishing between $...$ and $$...$$.
  • Image Descriptions for LLMs: describes embedded images using structured <img> tags; handles logos, charts, plots, and so on.
  • Signature Detection & Isolation: finds and tags signatures in scanned documents, outputting them in <signature> blocks.
  • Watermark Extraction: extracts watermark text and stores it within a <watermark> tag for traceability.
  • Smart Checkbox & Radio Button Handling: converts checkboxes to Unicode symbols like ☑, ☒, and ☐ for reliable parsing in downstream apps.
  • Complex Table Extraction: handles multi-row/column tables, preserving structure and outputting both Markdown and HTML formats.
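
For anyone who wants to try it programmatically, usage presumably follows the standard transformers image-text-to-text flow. This is an unverified sketch; the exact model class and recommended prompt are on the model card:

```python
# Unverified sketch of running Nanonets-OCR-s with Hugging Face transformers.
# The prompt string here is a placeholder; see the model card for the real one.
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "nanonets/Nanonets-OCR-s"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

image = Image.open("scanned_page.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this document to Markdown."},  # placeholder
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2048)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```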

Huggingface / GitHub / Try it out:

Huggingface Model Card

Read the full announcement

Try it with Docext in Colab

Example outputs: checkboxes, equations, image descriptions, signatures, tables, watermarks.

90
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/ptarlye on 2025-06-13 20:52:39+00:00.


About a year ago, I watched this 3Blue1Brown LLM tutorial on how a model’s self-attention mechanism is used to predict the next token in a sequence, and I was surprised by how little we know about what actually happens when processing the sentence "A fluffy blue creature roamed the verdant forest."

A year later, the field of mechanistic interpretability has seen significant advancements, and we're now able to "decompose" models into interpretable circuits that help explain how LLMs produce predictions. Using the second iteration of an LLM "debugger" I've been working on, I compare the hypothetical representations used in the tutorial to the actual representations I see when extracting a circuit that describes the processing of this specific sentence. If you're into model interpretability, please take a look! https://peterlai.github.io/gpt-circuits/

91
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/penguiny1205 on 2025-06-13 04:08:44+00:00.


During one of my experiments, I reduced the latent dimension of my autoencoder to 1, which yielded surprisingly good reconstructions of the input data. (See example below)

Reconstruction (blue) of input data (orange) with dim(Z) = 1

I was surprised by this. My first suspicion was that the autoencoder had entered one of its failure modes, i.e., that it was indexing the data and "memorizing" it somehow. But a quick sweep across the latent space revealed that the single latent parameter was capturing features of the data in a smooth and meaningful way. (See the gif below.) I thought this was a somewhat interesting observation!

Reconstructed data with latent parameter z taking values from -10 to 4. The real/encoded values of z have mean = -0.59 and std = 0.30.
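
For anyone curious, the sweep itself is only a few lines. A minimal sketch, assuming a trained PyTorch decoder that maps a (batch, 1) latent tensor back to signal space (the `decoder` name is a placeholder for my model):

```python
# Minimal latent-sweep sketch: decode a grid of 1-D latent values and overlay
# the reconstructions. Assumes `decoder` is an already-trained nn.Module.
import torch
import matplotlib.pyplot as plt

zs = torch.linspace(-10, 4, steps=50).unsqueeze(1)  # (50, 1) latent values
with torch.no_grad():
    recon = decoder(zs)  # (50, signal_length) reconstructions

for x in recon:
    plt.plot(x.numpy(), alpha=0.3)
plt.title("Reconstructions as z sweeps from -10 to 4")
plt.show()
```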

92
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/No-Discipline-2354 on 2025-06-11 14:46:17+00:00.


I am working on a geospatial ML problem. It is a binary classification problem where each data sample (a geometric point location) has about 30 different features that describe the various land topography (slope, elevation, etc).

Upon doing literature surveys, I found that a lot of other research in this domain takes the observed data points and randomly train-test splits them (as in every other ML problem). But this approach assumes independence between the data samples. With geospatial problems, a niche but significant issue comes into the picture: spatial autocorrelation, i.e., points that are closer to each other geographically are more likely to have similar characteristics than points further apart.

A lot of research also mentions that the models used may only work well in their study regions, with no guarantee as to how well they will adapt to new regions. Hence the motive of my work is essentially to provide a method for demonstrating, or evidence of, a model's generalization capacity.

Thus, research that simply uses ML models with random train-test splits can run into the issue that train and test samples lie near each other, i.e., have extremely high spatial correlation. As per my understanding, this makes it difficult to know whether the models are generalising or just memorising, because there is not a lot of variety between the test and training locations.

So the approach I have taken is to split train and test sub-region-wise across my entire study area. I have divided my region into 5 sub-regions and am essentially performing cross-validation, using each of the 5 sub-regions as the test region one by one. I then average the results across the 'fold regions' and use that as the final evaluation metric to understand whether my model is actually learning anything.
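
For concreteness, this region-wise scheme maps directly onto scikit-learn's GroupKFold, using each point's sub-region id as its group label (the model and metric here are placeholders for whatever I actually use):

```python
# Region-wise cross-validation sketch: GroupKFold guarantees that no
# sub-region contributes points to both the train and test side of a fold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def region_cv(X, y, regions):
    """X: (n_points, 30) topographic features; y: binary labels;
    regions: sub-region id (0-4) for every point."""
    scores = []
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=regions):
        model = RandomForestClassifier(n_estimators=300, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[test_idx])[:, 1]
        scores.append(roc_auc_score(y[test_idx], proba))
    return np.mean(scores), np.std(scores)
```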

My theory is that showing a model can generalise across different types of regions acts as evidence of its generalisation capacity and that it is not memorising. After this I pick the best model, retrain it on all the data points (the entire region), and can then point to the region-wise fold metrics as evidence that it generalises across regions.

I just want a second opinion to understand whether any of this actually makes sense, and to know whether there is anything else I should work on to give my methods proper supporting evidence.

If anyone requires further elaboration do let me know :}

93
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/AccomplishedCode4689 on 2025-06-12 13:43:22+00:00.


We introduce ABBA, a new architecture for Parameter-Efficient Fine-Tuning (PEFT) that significantly outperforms LoRA and all its major variants across a broad range of benchmarks, all under the same parameter budget.

Most PEFT methods, including LoRA, represent weight updates using a low-rank decomposition added to the frozen model weights. While effective, this structure can limit the expressivity of the update, especially at low rank.

ABBA takes a fundamentally different approach:

ABBA Architecture

  • Reparameterizes the update as a Hadamard product of two independently learned low-rank matrices (see the sketch after this list)
  • Decouples the two components of the update from the base model, allowing them to be optimized freely
  • Enables significantly higher expressivity and improved performance under the same parameter budget
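
As a rough PyTorch sketch of the idea (initialization and scaling details are placeholders; see the paper and repo for the actual implementation):

```python
# Illustrative ABBA-style adapter: the weight update is a Hadamard product of
# two independently learned low-rank factorizations, (B1 @ A1) * (B2 @ A2).
import torch
import torch.nn as nn

class ABBALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        self.base = base.requires_grad_(False)  # frozen pretrained layer
        self.B1 = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.A1 = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B2 = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.A2 = nn.Parameter(torch.randn(rank, in_f) * 0.01)

    def forward(self, x):
        # Hadamard product of two low-rank updates: more expressive than a
        # single low-rank term at the same parameter count.
        delta = (self.B1 @ self.A1) * (self.B2 @ self.A2)
        return self.base(x) + x @ delta.T
```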

📈 Empirical Results

ABBA consistently beats state-of-the-art LoRA-based methods like HiRA, DoRA, and LoRA-Pro across four open-source LLMs: Mistral-7B, Gemma-2 9B, LLaMA-3.2 1B, and LLaMA-3.2 3B, on a suite of commonsense and arithmetic reasoning benchmarks. In several cases, ABBA even outperforms full fine-tuning.

📄 Paper: https://arxiv.org/abs/2505.14238

💻 Code: https://github.com/CERT-Lab/abba

We’d love to hear your thoughts, whether you're working on PEFT methods, fine-tuning, or anything related to making LLMs more adaptable and efficient. We're happy to answer questions, discuss implementation details, or just hear how this fits into your work.

94
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/tanishqkumar07 on 2025-06-12 12:33:31+00:00.


Hey friends, the world needs more serious AI researchers. Many AI/LLM beginners mentioned to me that they learn better from implementations than from papers/math, but existing open-source examples rarely go beyond basic nanoGPT-level demos.

To help bridge the gap, I spent the last two months full-time building and open-sourcing a self-contained reimplementation of most modern deep learning techniques from scratch. The result is beyond-nanoGPT, containing 20k+ lines of handcrafted, minimal, and extensively annotated PyTorch code for your educational pleasure.

It contains a clean, working implementation + demo of everything from KV caching to linear attention to diffusion Transformers to AlphaZero, and even a minimal coding agent that can make end-to-end PRs autonomously.
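
To give a flavor of the level the repo targets, here is a minimal KV-caching example in the same spirit (illustrative, not code from the repo):

```python
# Minimal single-head KV-cache sketch: store past keys/values so each decode
# step only projects the newest token instead of recomputing the whole prefix.
import torch
import torch.nn.functional as F

def decode_step(x_new, Wq, Wk, Wv, cache):
    """x_new: (1, d) embedding of the newest token; cache: dict holding 'k','v'."""
    q, k, v = x_new @ Wq, x_new @ Wk, x_new @ Wv
    cache["k"] = torch.cat([cache["k"], k]) if "k" in cache else k
    cache["v"] = torch.cat([cache["v"], v]) if "v" in cache else v
    scale = cache["k"].shape[-1] ** 0.5
    attn = F.softmax(q @ cache["k"].T / scale, dim=-1)  # attend over all cached tokens
    return attn @ cache["v"]  # (1, d) attention output
```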

I'd love feedback on how to make it more helpful for people interested in transitioning into deep learning research. I will continue to add features and maintain the repo for the foreseeable future. The roaring 2020s are a surreal time to be alive, and we need all hands on deck.

95
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/xiikjuy on 2025-06-12 12:29:44+00:00.


Before the LLM era, it seemed useful, or at least justifiable, to apply GNNs/GCNs to domains like molecular science, social network analysis, etc. But now everything is an LLM-based approach. Are these graph approaches still promising at all?

96
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Long-Sleep-13 on 2025-06-12 12:25:53+00:00.


Hey everyone,

Following up on our initial announcement, we're excited to launch a major update for SWE-rebench, the continuously updated benchmark for software engineering LLMs.

Thanks to the community's valuable feedback, we've added several new features:

  • Tool Usage Support: Agents can now interact with the environment using both text-based and tool-based approaches. You can filter the leaderboard to see results for each type.
  • New Frontier Models: We've evaluated the latest models such as Claude Sonnet 3.5/4 and OpenAI o3. We're working on adding more, like Gemini 2.5 Pro, and we'd love to hear your suggestions for other models to include.
  • Fresh May Problems: We've mined a new set of problems from May 2025 and evaluated all current models against them.

Check out the updated leaderboard here: https://swe-rebench.com/leaderboard

We welcome your feedback!

97
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/joacojoaco on 2025-06-12 00:04:00+00:00.


Okay, I just had one of those classic shower thoughts and I’m struggling to even put it into words well enough to Google it — so here I am.

Imagine this:

You have Dataset A, which contains different kinds of cells, all going through various labeled stages of mitosis.

Then you have Dataset B, which contains only one kind of cell, and only in phase 1 of mitosis.

Now, suppose you train a VAE on both datasets together. Ideally, the latent space would organize itself into clusters: different types of cells, in different phases.

Here's the idea: could you somehow compute the "difference" in latent space between phase 1 and phase 2 for the same cell type from Dataset A, like a "phase change direction vector"? Then apply that vector to the B-cell cluster in phase 1, and use the decoder to generate what a B cell in phase 2 might look like.

Would that work?

A bunch of questions are bouncing around in my head:

  • Does this even make sense?
  • Is this worth trying?
  • Has someone already done something like this?
  • Since VAEs encode into a probabilistic latent space, what would be the mathematically sound way to define this kind of "direction" or "movement"? Is it something like vector arithmetic on the means of the latent distributions? Or is that too naive?

I feel like I'm either stumbling toward something or completely misunderstanding how VAEs and biological processes work. Any thoughts, hints, papers, keywords, or reality checks would be super appreciated.

98
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/stalin1891 on 2025-06-11 19:31:27+00:00.


Are there any state-of-the-art VLMs that excel at spatial reasoning in images? For example, explaining the relationship of a given object to other objects in the scene. I have tried VLMs like LLaVA; they give satisfactory responses, but it is hard to refer to a specific instance of an object when multiple instances are present in the image (e.g., two chairs).

99
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/NumberGenerator on 2025-06-11 11:01:45+00:00.


I am a researcher in a small group and would appreciate a second perspective on my situation.

My typical workload involves 1-2 independent projects at a time, with the goal of publishing in top-tier conferences. Collaboration within my group is non-existent; my main interaction is a monthly meeting with my supervisor for general updates. Before deadlines, my supervisor might provide minor grammatical/stylistic edits, but the core idea, research, and writing are done independently. Alongside my research, I also have responsibilities that do not contribute to my research output, like grant applications and student supervision.

I am concerned that my research output might be significantly lower than that of researchers in larger, more collaborative groups. So I am wondering whether publishing single-author papers would be a good way to contextualize my output. What are your thoughts on this? Would single-author papers be perceived positively?

100
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/machinelearning by /u/Kingandpawnendgame on 2025-06-11 03:03:39+00:00.


We introduce FlashDMoE, the first system to completely fuse the Distributed MoE forward pass into a single kernel—delivering up to 9x higher GPU utilization, 6x lower latency, and 4x improved weak-scaling efficiency.

Code: https://github.com/osayamenja/Kleos/blob/main/csrc/include/kleos/moe/README.MD

Paper: https://arxiv.org/abs/2506.04667

If you are a CUDA enthusiast, you would enjoy reading the code :) We write the fused layer from scratch in pure CUDA.
