Required reading:
- Starting out
- Forte, J. (2024). Introduction to GPT-4o and GPT-4o mini. https://cookbook.openai.com/examples/gpt4o/introduction_to_gpt4o
- Sanders, T. & Heaton, M. (2022). Question answering using embeddings-based search. https://cookbook.openai.com/examples/question_answering_using_embeddings
- Prompt engineering
- OpenAI. Prompt engineering. https://platform.openai.com/docs/guides/prompt-engineering
- Llama. Prompting. https://www.llama.com/docs/how-to-guides/prompting/
- Embeddings
- OpenAI. Embeddings. https://platform.openai.com/docs/guides/embeddings
- Structuring
- Yurtsev, E. (2023). Kor. https://eyurtsev.github.io/kor/index.html
- If you are comfortable with programming:
- Gil Guzmán, K. (2024). Structured outputs. https://cookbook.openai.com/examples/structured_outputs_intro
- LLM applications with LangChain
- IBM Technology (2024). LangChain explained. https://www.youtube.com/watch?v=1bUy-1hGZpI
Further reading:
- Ngo, R. (2021). A short introduction to machine learning. https://www.alignmentforum.org/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning
- 3Blue1Brown (2017). But what is a neural network? https://www.youtube.com/watch?v=aircAruvnKk
- Codebasics (2021). What is self-supervised learning? https://www.youtube.com/watch?v=sJzuNAisXHA
- Yan, Z. (2024). Evaluating the Effectiveness of LLM-Evaluators (aka LLM-as-Judge). eugeneyan.com. https://eugeneyan.com/writing/llm-evaluators/
- Barrie, C., Palmer, A., & Spirling, A. (2024). Replication for Language Models: Problems, Principles, and Best Practice for Political Science. https://arthurspirling.org/documents/BarriePalmerSpirlingTrustMeBro.pdf
- Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., & Ba, J. (2022). Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910.
- OpenAI resources. https://cookbook.openai.com/articles/related_resources