sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

#HistoryOfAI

Hacker News
The average chess players of Bletchley Park and AI research in Britain
https://blogs.bl.uk/science/2025/06/the-average-chess-players-of-bletchley-park-and-ai-research-in-britain.html

#HackerNews #averagechessplayers #BletchleyPark #AIresearch #Britain #historyofAI #chessandtech
Harald Klinke
Just revisited The Quest for Artificial Intelligence by Nils J. Nilsson, a brilliant walk through the history, breakthroughs, and ethical crossroads of AI. A must-read if you're building or governing AI today.
https://ai.stanford.edu/~nilsson/QAI/qai.pdf

#AI #MachineLearning #AIethics #HistoryOfAI
Harald Sack
Building on foundations laid in the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.

F. Jelinek (1997). Statistical Methods for Speech Recognition. MIT Press, Cambridge, MA.

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique
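As a minimal sketch of the technique (not from the toot or from Jelinek's book), here is how a bigram language model might be estimated and queried, with add-one smoothing; the toy corpus and function name are illustrative assumptions:

```python
from collections import Counter

# Toy corpus; a real n-gram LM would be trained on vast text collections.
corpus = "the cat sat on the mat the cat ate".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = set(corpus)

def bigram_prob(w1, w2):
    """P(w2 | w1) with add-one (Laplace) smoothing."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

# Predict the most likely next word after "cat".
print(max(vocab, key=lambda w: bigram_prob("cat", w)))
```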
Harald Sack
Next stop on our NLP timeline: the (mostly) futile attempts at machine translation during the Cold War era. The rule-based machine translation approach was used mostly to create dictionaries and grammar programs. Its major drawback was that absolutely everything had to be made explicit.

#nlp #historyofscience #ise2025 #lecture #machinetranslation #coldwar #AI #historyofAI @tabea @enorouzi @sourisnumerique @fiz_karlsruhe @fizise
guIA - guía a la IA
https://www.youtube.com/watch?v=uEztHu4NHrs

#historyOfAI
Harald Sack
Summarizing our very brief #HistoryOfAI, published here over several weeks in a series of toots, let's look at the popularity dynamics of symbolic vs. subsymbolic AI, put into perspective against historical AI heydays and winters via the Google Ngram Viewer.
https://books.google.com/ngrams/graph?content=ontology%2Cneural+network%2Cmachine+learning%2Cexpert+system&year_start=1955&year_end=2022&corpus=en&smoothing=3&case_insensitive=false

#ISE2024 #AI #ontologies #machinelearning #neuralnetworks #llms @fizise @sourisnumerique @enorouzi #semanticweb #knowledgegraphs
Harald Sack
In 2022, with the advent of ChatGPT, large language models and AI in general gained unprecedented popularity. ChatGPT combined InstructGPT, a GPT-3 model complemented and fine-tuned with reinforcement learning from human feedback, with Codex text2code capabilities, plus a massive engineering effort.

N. Lambert, et al. (2022). Illustrating Reinforcement Learning from Human Feedback (RLHF). https://huggingface.co/blog/rlhf

#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #llm #gpt #llms
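A central step in the RLHF recipe described in the linked post is training a reward model on human preference pairs. The sketch below shows the standard pairwise preference loss in PyTorch; the scores are illustrative values, not code from the post:

```python
import torch
import torch.nn.functional as F

# Scalar reward-model scores for response pairs where annotators
# preferred `chosen` over `rejected` (illustrative values).
r_chosen = torch.tensor([1.2, 0.3, 2.1])
r_rejected = torch.tensor([0.4, 0.9, 1.0])

# Pairwise loss: -log sigmoid(r_chosen - r_rejected) pushes the
# preferred response's reward above the rejected one's.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
print(loss.item())
```

The policy (e.g., InstructGPT) is then fine-tuned with reinforcement learning against this learned reward.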
Harald Sack
Higher, faster, farther... in 2021, generative AI gained momentum with the advent of DALL-E, a GPT-3-based zero-shot text2image model, and other major milestones such as GitHub Copilot, OpenAI Codex, WebGPT, and Google LaMDA.

Codex: Chen, M., et al. (2021). Evaluating Large Language Models Trained on Code. https://arxiv.org/abs/2107.03374
DALL-E: Ramesh, A., et al. (2021). Zero-Shot Text-to-Image Generation. https://arxiv.org/abs/2102.12092

#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #llm #gpt
Harald Sack
In 2020, GPT-3 was released by OpenAI, based on 45 TB of data crawled from the web. A "data quality" predictor was trained to boil the training data down to 570 GB of "high quality" data. Learning from the prompt (few-shot learning) was also introduced.

T. B. Brown et al. (2020). Language Models are Few-Shot Learners. NeurIPS 2020, pp. 1877–1901. https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

#HistoryOfAI #AI #ISE2024 #llms #gpt #lecture @enorouzi @sourisnumerique @fizise
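To make "learning from the prompt" concrete: in few-shot prompting the task is demonstrated in-context and the model simply continues the pattern, with no gradient updates. A sketch in the style of the GPT-3 paper's examples (the exact wording is an illustrative assumption):

```python
# Few-shot prompt: the task is shown via examples, never trained on.
prompt = """Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""

# A GPT-3-class model completes the pattern, e.g. " fromage",
# without any parameter updates.
```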
Harald Sack
In 2019, OpenAI released GPT-2 as a direct scale-up of GPT, comprising 1.5B parameters and trained on 8M web pages.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
OpenAI blog post: https://openai.com/index/better-language-models/
GPT-2 on HuggingFace: https://huggingface.co/openai-community/gpt2

#HistoryOfAI #AI #llm #ISE2024 @fizise @enorouzi @sourisnumerique #gpt
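Since the toot links the GPT-2 checkpoint on HuggingFace, here is a minimal sketch of sampling from it with the transformers library (the prompt is an arbitrary assumption):

```python
from transformers import pipeline

# Load the openai-community/gpt2 checkpoint linked above.
generator = pipeline("text-generation", model="openai-community/gpt2")

# GPT-2 is an "unsupervised multitask learner": many tasks reduce
# to continuing a text prompt.
out = generator("The history of AI began", max_new_tokens=30, do_sample=True)
print(out[0]["generated_text"])
```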
Harald Sack
In 2018, Generative Pre-trained Transformers (GPT, by OpenAI) and Bidirectional Encoder Representations from Transformers (BERT, by Google) were introduced.

Radford, A., et al. (2018). Improving Language Understanding by Generative Pre-Training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf

Devlin, J., et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL 2019. https://aclanthology.org/N19-1423

#HistoryOfAI #ISE2024 #AI #llm @fizise @enorouzi @sourisnumerique
Harald Sack
In 2014, attention mechanisms were introduced by Bahdanau, Cho, and Bengio, allowing models to selectively focus on specific parts of the input. In 2017 followed the Transformer model, introduced by Ashish Vaswani et al., which learns to encode and decode sequential information and is especially effective for tasks like machine translation and #NLP.

Attention: https://arxiv.org/pdf/1409.0473
Transformers: https://arxiv.org/pdf/1706.03762

#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #transformers
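A minimal NumPy sketch of the Transformer's core operation, scaled dot-product attention as defined in Vaswani et al. (the matrix sizes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of values

# Three queries attending over four key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```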
Harald Sack
In 2013, Mikolov et al. (from Google) published word2vec, a neural-network-based framework to learn distributed representations of words as dense vectors in a continuous space, aka word embeddings.

T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781. https://arxiv.org/abs/1301.3781

#HistoryOfAI #AI #ise2024 #lecture #distributionalsemantics #wordembeddings #embeddings @sourisnumerique @enorouzi @fizise
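A minimal sketch of training such embeddings with the gensim library (the toy corpus is an assumption; useful embeddings require far more text):

```python
from gensim.models import Word2Vec

# Tiny tokenized corpus (illustrative only).
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["cat", "sat", "on", "the", "mat"],
]

# sg=1 selects the skip-gram variant from Mikolov et al. (2013).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, seed=1)

# Each word is now a dense vector; words in similar contexts get similar vectors.
print(model.wv["king"].shape)               # (50,)
print(model.wv.similarity("king", "queen"))
```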
Harald Sack
In 1999, NVIDIA introduced the first Graphics Processing Unit (GPU) card, the NVIDIA GeForce 256, enabling an unprecedented speedup for the parallel computations required for machine learning. This innovation paved the way for the rapid advancement of deep learning algorithms.

John Peddie. Famous Graphics Chips: Nvidia's GeForce 256. IEEE Computer Society. https://www.computer.org/publications/tech-news/chasing-pixels/nvidias-geforce-256

#HistoryOfAI #ISE2024 #AI #deeplearning #machinelearning #lecture @sourisnumerique @enorouzi @fizise @fiz_karlsruhe
Harald Sack
In 1996, Long Short-Term Memory (LSTM) recurrent neural networks were introduced by Sepp Hochreiter and Jürgen Schmidhuber, efficiently enabling #neuralnetworks to process sequences of data (instead of single data points), to learn from sequential data, and to generate text.

Hochreiter, Sepp; Schmidhuber, Jürgen (1996). LSTM Can Solve Hard Long Time Lag Problems. Advances in NIPS, pp. 473–479. https://dl.acm.org/doi/10.5555/2998981.2999048

#HistoryOfAI #AI #ISE2024 #lecture @sourisnumerique @enorouzi @fizise
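A minimal sketch of running a sequence through an LSTM with PyTorch's built-in module (the sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

# An LSTM maps a sequence of input vectors to a sequence of hidden states,
# using gates that preserve information across long time lags.
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

x = torch.randn(4, 10, 16)        # 4 sequences, 10 time steps, 16 features
outputs, (h_n, c_n) = lstm(x)

print(outputs.shape)  # (4, 10, 32): one hidden state per time step
print(h_n.shape)      # (1, 4, 32): final hidden state per sequence
```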
Harald Sack
In 1994, Tim Berners-Lee introduced the #SemanticWeb in his plenary presentation at the 1st WWW conference in Geneva, Switzerland.

"I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A Semantic Web, which makes this possible, has yet to emerge..."

Slides from TBL, 1994: https://www.w3.org/Talks/WWW94Tim/

#HistoryOfAI #AI #knowledgegraphs #lecture #ISE2024 @sourisnumerique @enorouzi
Harald Sack
In 1968, Terry Winograd introduced SHRDLU, a natural language understanding agent that was able to plan and execute directives in a rudimentary "blocks world". In particular, SHRDLU emphasized the importance of user-friendly interfaces for HCI.

T. Winograd (1970). Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. MIT AI Technical Report 235. https://web.archive.org/web/20201003212106/http://dspace.mit.edu/bitstream/handle/1721.1/7095/AITR-235.pdf

#HistoryOfAI #AI #lecture #ISE2024 @fiz_karlsruhe @sourisnumerique @enorouzi
Harald Sack
In 1965, Dendral, one of the first expert systems, was introduced by Edward Feigenbaum, Joshua Lederberg, and Carl Djerassi. It was designed to help organic chemists identify unknown organic molecules by analyzing their mass spectra against a chemistry knowledge base.

http://web.mit.edu/6.034/www/6.s966/dendral-history.pdf

#HistoryOfAI #ISE2024 #lecture #AI #chemistry #expertsystem @fizise @enorouzi @sourisnumerique
Harald Sack
During the Cold War, rule-based machine translation from English to Russian and vice versa was a hot topic. However, to translate languages with rules, you have to explicitly cover an innumerable number of exceptions. Thus, government funding for machine translation was cut in 1966, leading to the first AI winter.

W. J. Hutchins (1985). Machine Translation: Past, Present, and Future. Longman. p. 5. https://archive.org/details/machinetranslati0000unse_q9u2

#AI #HistoryOfAI #ISE2024 @sourisnumerique @enorouzi @fizise #lecture
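To make the drawback concrete, a hypothetical word-for-word rule-based translator: every dictionary entry and every exception has to be written by hand, and any input the rules do not cover simply fails (all entries below are illustrative):

```python
# Hand-written dictionary; Russian drops articles and the copula "is".
en_to_ru = {"the": "", "is": "", "spirit": "дух", "willing": "полон решимости"}

def translate(sentence):
    # Word-for-word substitution: no rule means no translation.
    words = (en_to_ru.get(w, f"<{w}?>") for w in sentence.lower().split())
    return " ".join(w for w in words if w)

print(translate("The spirit is willing"))  # covered: "дух полон решимости"
print(translate("The flesh is weak"))      # not covered: "<flesh?> <weak?>"
```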
Harald Sack
Two millennia after Aristotle, Gottfried Wilhelm Leibniz adopted the idea of representing knowledge in a (mathematical) universal language and proposed the calculus ratiocinator to reason over this knowledge.

G. W. Leibniz (1676). De arte characteristica ad perficiendas scientias ratione nitentes. https://www.uni-muenster.de/Leibniz/DatenVI4/VI4a2.pdf#page=401

Lecture slides: https://docs.google.com/presentation/d/1SL4CpG0jMPaCXXyqLXmuDXTn_JResItKET8PKnHlbrE/edit?usp=drive_link

#HistoryOfAI #AI #ISE2024 #lecture @fizise @enorouzi @sourisnumerique #leibniz #philosophy #calculemus