Portal:Artificial intelligence
Four approaches summarize past attempts to define the field:
- The study of systems that think like humans.
- The study of systems that think rationally.
- The study of systems that act like humans.
- The study of systems that act rationally.
Of these approaches, the former two are considered to be "white-box" approaches because they require our analysis of intelligence to be based on the rationale for the behaviour rather than the behaviour itself. The latter two are considered "black-box" approaches because they operationalize intelligence by measuring performance over a task domain. We prefer the latter two because they allow for quantitative comparisons between systems rather than requiring a qualitative comparison of rationales. We realize that the ultimate performance of a system will depend heavily on the task domain that it is situated in, and this motivates our preference for studying activity (behaviour) rather than thought (rationale).
Although the cognitive modelling approach (the study of systems that think like humans) is of great importance to cognitive scientists, we concern ourselves with the fourth approach. Of the four, it allows us to consider the performance of a theoretical system that yields the behaviour optimally suited to achieve its goals, given the information available to it.
This approach motivates the model we use for our intelligent systems: the intelligent agent.
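The sketch below illustrates this percept-to-action abstraction in Python; the Agent class and the toy vacuum-world example are illustrative assumptions for this portal, not the API of any particular library.

 from abc import ABC, abstractmethod

 class Agent(ABC):
     """An agent maps what it has perceived so far to what it does next."""
     @abstractmethod
     def act(self, percept):
         """Return the action expected to best achieve the agent's goals,
         given the information available to it."""

 class ReflexVacuumAgent(Agent):
     """Toy two-square vacuum world: suck if the square is dirty, else move on."""
     def act(self, percept):
         location, dirty = percept  # e.g. ("A", True)
         if dirty:
             return "Suck"
         return "Right" if location == "A" else "Left"

 agent = ReflexVacuumAgent()
 print(agent.act(("A", True)))   # Suck
 print(agent.act(("B", False)))  # Left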
See: Learning Projects and the Wikiversity:Learning model.
Learning materials and learning projects are located in the main Wikiversity namespace. Simply make a link to the name of the learning project (learning projects are independent pages in the main namespace) and start writing! We suggest using the learning project template (place "subst:Learning project boilerplate" on the new page, inside double curly braces {{}}).
Learning materials and learning projects can be used by multiple departments. Cooperate with other departments that use the same learning resource. Understanding AI as a field of computer science requires a thorough grounding in the topics covered by the readings and courses listed below.
Remember, Wikiversity has adopted the "learning by doing" model for education. Lessons should center on learning activities for Wikiversity participants. We learn by doing.
Select a descriptive name for each learning project.
Applied projects
Research projects
Readings
Wikipedia
- Artificial intelligence
- Deep belief network
- Speech recognition
- List of artificial intelligence projects
- List of datasets for machine learning research
See also
External links
- Numenta Platform for Intelligent Computing (NuPIC)
- 'Wikipedia and Artificial Intelligence: An Evolving Synergy' - a workshop
- Common-sense Computing @ MIT Media Lab Project page
- MLOps Wiki - A glossary of machine learning terms
- AI Research - A collection of research-based articles in AI space
Documentation, manuals
Courses
Fastbook
- https://github.com/fastai/fastbook/
- Your Deep Learning Journey (see the minimal example after this chapter list)
- From Model to Production
- Data Ethics
- Under the Hood: Training a Digit Classifier
- Image Classification
- Other Computer Vision Problems
- Training a State-of-the-Art Model
- Collaborative Filtering Deep Dive
- Tabular Modeling Deep Dive
- NLP Deep Dive: RNNs
- Data Munging with fastai's Mid-Level API
- A Language Model from Scratch
- Convolutional Neural Networks
- ResNets
- Application Architectures Deep Dive
- The Training Process
- A Neural Net from the Foundations
- CNN Interpretation with CAM
- A fastai Learner from Scratch
- Concluding Thoughts
- Appendix: Jupyter Notebook 101
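As a taste of the chapters above, here is a minimal image-classification sketch in the spirit of the book's opening chapter. It assumes a recent fastai is installed and a network connection to download the Oxford-IIIT Pet dataset; treat it as an illustrative sketch rather than a prescribed exercise.

 from fastai.vision.all import *

 # Download and cache the Oxford-IIIT Pet images.
 path = untar_data(URLs.PETS) / 'images'

 def is_cat(fname):
     # In this dataset, cat breeds have capitalised file names.
     return fname[0].isupper()

 dls = ImageDataLoaders.from_name_func(
     path, get_image_files(path), valid_pct=0.2, seed=42,
     label_func=is_cat, item_tfms=Resize(224))

 # Transfer learning: start from a pretrained ResNet and fine-tune briefly.
 learn = vision_learner(dls, resnet34, metrics=error_rate)
 learn.fine_tune(1)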
Hugging Face Natural Language Processing
- Natural Language Processing (NLP) course
- transformer models
- using transformers: pipeline, tokenizer, AutoModel, decoding, padding, attention mask (see the sketch after this list)
- fine-tuning a pretrained model: preprocessing, map, dataset, dynamic padding, batch, collate function, train, predict, evaluate, accelerate
- sharing models and tokenizers: hub, model card
- the datasets library: batch, DataFrame, validation, splitting, embedding, FAISS
- the tokenizers library: grouping, QnA, normalizers, pre-tokenization, models, trainers: BPE, WordPiece, Unigram, post processors, decoders
- main nlp tasks: token classification, metrics, perplexity, translation, summarization
- how to ask for help
- building and sharing demos
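A minimal sketch of the pipeline / tokenizer / AutoModel workflow covered in the chapters above, assuming the transformers and torch packages are installed; the checkpoint name is one the course itself uses for sentiment analysis, chosen here only for illustration.

 import torch
 from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

 checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"

 # High-level API: a pipeline bundles tokenizer, model and post-processing.
 classifier = pipeline("sentiment-analysis", model=checkpoint)
 print(classifier("Wikiversity learning projects are great."))

 # Lower-level API: tokenize with padding and an attention mask, then run the model.
 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
 model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
 batch = tokenizer(["I love this course!", "This chapter was too long."],
                   padding=True, truncation=True, return_tensors="pt")
 with torch.no_grad():
     logits = model(**batch).logits
 print(torch.softmax(logits, dim=-1))  # class probabilities per sentence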
- Artificial Intelligence: A Modern Approach, companion site to the popular textbook
Open source software
- EVO - 3D artificial life simulator
- MindForth artificial mind for robots
- Open Source 3D Vision Library
- Texai English Lexicon, Fluid Construction Grammar, and RDF Entity Manager
People
- --NicholasTurnbull 04:12, 14 November 2008 (UTC)