#continuallearning


As they present their idea for generating symbolic AI domains using #LLMs, they also give an interesting summary of why "Good Old-Fashioned #AI" is still entirely relevant today, even while acknowledging its limitations.

youtube.com/watch?v=_TrKARhF5c

I, too, tend to recommend traditional #AI in my solutions, rather than blindly following the current trends.

Plus, it paves the way for #continuallearning, improving a system's skills over time.

ML algorithms need lots of data and are prone to catastrophic forgetting. We present a new method for continual few-shot learning that brings us closer to the way humans learn: sample-efficient, while maintaining long-term retention.
📜arxiv.org/abs/2301.04584

🧵 below:

arXiv.org: Continual HyperTransformer: A Meta-Learner for Continual Few-Shot Learning

We focus on the problem of learning without forgetting from multiple tasks arriving sequentially, where each task is defined using a few-shot episode of novel or already seen classes. We approach this problem using the recently published HyperTransformer (HT), a Transformer-based hypernetwork that generates specialized task-specific CNN weights directly from the support set. In order to learn from a continual sequence of tasks, we propose to recursively re-use the generated weights as input to the HT for the next task. This way, the generated CNN weights themselves act as a representation of previously learned tasks, and the HT is trained to update these weights so that the new task can be learned without forgetting past tasks. This approach is different from most continual learning algorithms that typically rely on using replay buffers, weight regularization or task-dependent architectural changes. We demonstrate that our proposed Continual HyperTransformer method equipped with a prototypical loss is capable of learning and retaining knowledge about past tasks for a variety of scenarios, including learning from mini-batches, and task-incremental and class-incremental learning scenarios.
#AI #CV #NewPaper
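The core idea from the abstract, recursively feeding the generated weights back into the hypernetwork so the weights themselves act as the memory of past tasks, can be sketched in a few lines. Everything below is an illustrative stand-in under assumed names (ToyHyperNet, continual_few_shot, the toy dimensions), not the paper's actual HyperTransformer code:

```python
import torch

class ToyHyperNet(torch.nn.Module):
    """Hypothetical stand-in for the HyperTransformer: maps a few-shot
    support set plus the previously generated weights to an updated
    flat weight vector."""
    def __init__(self, in_dim: int, weight_dim: int):
        super().__init__()
        self.encode = torch.nn.Linear(in_dim, weight_dim)

    def forward(self, support_x, prev_weights):
        # Pool the support set into one embedding and blend it with the
        # previous weights, so information about earlier tasks is
        # carried forward rather than overwritten.
        update = self.encode(support_x).mean(dim=0)
        return prev_weights + update

def continual_few_shot(hypernet, task_stream, init_weights):
    """Learn a stream of few-shot tasks with no replay buffer: each new
    task's weights are generated *from* the previous ones."""
    weights = init_weights
    for support_x in task_stream:
        weights = hypernet(support_x, weights)
    return weights

# Toy usage: three sequential 5-shot "tasks" with 16-dim features.
hyper = ToyHyperNet(in_dim=16, weight_dim=32)
tasks = [torch.randn(5, 16) for _ in range(3)]
w = continual_few_shot(hyper, tasks, init_weights=torch.zeros(32))
print(w.shape)  # torch.Size([32])
```

The point of the sketch is the loop structure: no replay buffer or per-task architecture appears, only the recursively updated weight vector, which is what distinguishes the approach from most continual learning methods per the abstract.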

Hi all. First post on here. In case you don't know me, I've been in the field for a while (trained my first networks in 1999, but they were rubbish!), now focused on applications, but with interests in statistics too. Currently leading a fantastic team of scientists and engineers at Amazon. Slightly sad at the demise of Twitter; hoping this place can fill the hole it left!