The Internet Society (ISOC) held a workshop on digital literacy, cybersecurity, and artificial intelligence (AI) on Wednesday at Avani Maseru. The event targeted legislators, law enforcement officers, and legal practitioners, aiming to strengthen digital trust, enhance data protection, and raise national awareness of cybersecurity and AI.
The workshop, supported by the Internet Society Foundation, provided hands-on training in secure digital communication, ethical data handling, and responsible technology use in public service. Participants were also guided on how to promote inclusivity and transparency in the fast-evolving digital landscape.
Speaking at the event, Neo Selematsela explained that AI simulates human cognition through what he termed “the synthetic use of creativity.” He traced the origins of AI back to the 1950s, noting that the technology has continued to evolve with time and now influences every sector.
Selematsela outlined two main forms of AI: traditional AI, which analyses information provided by users to perform defined tasks, and generative AI, which creates new content such as text or images. While he acknowledged AI’s usefulness in simplifying processes, he cautioned that it can also produce false or misleading information, known as “AI hallucinations.”
He warned that if not properly guided, AI systems may reflect cultural bias and pose ethical and child-safety risks, since the information fed into these systems shapes their output. Selematsela further emphasised that AI models must be trained on data that respects cultural norms and does not distort national values or harm vulnerable communities.
He recommended that schools begin structured lessons on digital literacy and AI ethics to equip learners with moral and ethical awareness from an early age. “We must find balance in giving children access to technology while guiding them on its responsible use,” he said. He further stressed the importance of teaching prompt engineering to help users safely and effectively interact with AI tools, while protecting organisational and personal data.
On cybersecurity, Boitumelo Lefaphana, a security engineer, highlighted that cybercrimes account for almost 30 percent of reported crimes in Lesotho, with scam notifications increasing by 28 percent between 2023 and 2024. He identified phishing attacks, online scams, and cracked applications as major risks that expose users to data theft and financial loss.
Ezekiel Senti from the Lesotho Mounted Police Service (LMPS) noted that cyber-related crimes have become increasingly sophisticated, making it difficult for law enforcement to keep up. He said the LMPS’s primary role remains maintaining peace and security, but new threats such as cyberbullying, identity theft, and fake online investments now demand urgent attention.
Senti revealed that scammers often impersonate prominent figures such as the King or the Prime Minister to lure victims into fraudulent investments. He also pointed out that cyberstalking and online car scams are among the most common cases handled by the police. He called for stronger data protection systems and public awareness to reduce the country’s vulnerability to digital threats.
The workshop concluded with a shared understanding that Lesotho needs clear frameworks and governance structures to regulate AI and digital activities. Selematsela cited China’s regulatory approach as an example of how countries can limit harmful content and protect citizens online.
Through such collaborations, ISOC and the Lesotho Communications Authority (LCA) are paving the way for a safer, more informed, and digitally resilient Lesotho, ensuring that technology supports, rather than compromises, the nation’s moral, ethical, and cultural fabric.