Learning and Applying AI Concepts the Easy Way

Recently, I was speaking with Tigmanshu Bhatnagar about AI education and how staff at humanitarian organizations are interested in learning more about artificial intelligence and machine learning. But starting a learning journey, or even a teaching journey, can feel daunting, especially for people who aren't proficient in computer and data science.

On a side note, perhaps as a result, our sector is seeing many discussion and learning initiatives emerge around non-technical themes like AI policy and responsible AI, which are still valuable to know but easier to teach and digest.

However, learning about "bias," for example, benefits from technical context, something I pointed out to Tigi during our discussion. That technical context is important, and indeed necessary, for understanding how to build unbiased applications. Because of this, it's useful for non-technical humanitarian staff to start their learning journeys with concepts like text encoding and vectorization, and learning these things doesn't have to be difficult.
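To make "vectorization" concrete: one common approach is a bag-of-words encoding, where each text becomes a vector of word counts over a fixed vocabulary. The sketch below is a minimal, illustrative version in plain Python (the function names and the sample humanitarian-flavored texts are my own, not from any particular library). Notice how any word outside the vocabulary simply disappears from the vector: a small, tangible example of how an encoding choice can silently drop information and introduce bias, say, against reports written in an under-represented language.

```python
# Minimal bag-of-words vectorization sketch (illustrative only).

def build_vocabulary(texts):
    """Collect every distinct lowercase word and assign it an index."""
    words = sorted({word for text in texts for word in text.lower().split()})
    return {word: i for i, word in enumerate(words)}

def vectorize(text, vocab):
    """Count how often each vocabulary word appears in the text.

    Words not in the vocabulary are silently ignored -- this is where
    encoding choices can quietly discard information.
    """
    vector = [0] * len(vocab)
    for word in text.lower().split():
        if word in vocab:
            vector[vocab[word]] += 1
    return vector

reports = ["flood displaced families", "drought displaced farmers"]
vocab = build_vocabulary(reports)
# Vocabulary order: displaced, drought, families, farmers, flood
print(vectorize("families displaced by flood", vocab))  # "by" is dropped
```

Running this shows that "by" contributes nothing to the vector because it never appeared in the training texts, which is exactly the kind of technical detail that makes discussions of bias more grounded.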

For fun, one can find or generate a list of common AI-related terms and learn something basic about each one. Then navigate to Perplexity.ai and ask something like: "What might be useful for someone who works for a humanitarian organization to understand about text encoding relevant to AI applications?" In return you'll get a concise summary along with information relevant to humanitarian applications of AI. This is what I call "the easy way" to learn some AI concepts and get a feel for their contextual value.

Of course, one can work from simple lists of terms or from more technical and extensive ones. Here's a detailed glossary for reference: https://www.expert.ai/glossary-of-ai-terms/

The above is just an illustration. My point is that it is crucial to streamline AI education for humanitarian staff in ways that nevertheless dig sufficiently into the technical vocabulary, concepts, code, and math behind AI. The take-away: doing this, and generating the content for it, can be easier than we think.


Wayan Vota

Digital Development Leader - Accelerating Engagement and Impact with Communities Worldwide

6mo

Brent O. Phillips I agree that development workers need to understand how something works in order to fully master it, and yet I think we may be pushing against human nature. There will always be a minority who can train an LLM and a majority that will say "yeah, yeah, whatever, can I get the answer I need now?" and not bother with the workings. I am reminded of software engineers who know the details of Windows, Office, etc. (or LLMs), but are managed by CIOs (who can't code their way out of a paper bag) who have greater decision levels and organizational controls.

Wayan Vota

6mo

Brent O. Phillips small but important point on your article: I don't believe we can build an unbiased LLM, or an unbiased anything, actually. We are biased as humans, and therefore we can reduce bias but never eliminate it. Or as John Gim explained last week at an AI in Asia conference, there is a trade-off between bias and fairness: the less bias, the more unfairness (as defined as conforming with established rules).
