<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://adeebnqo.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://adeebnqo.github.io/" rel="alternate" type="text/html" /><updated>2025-12-30T03:27:36-08:00</updated><id>https://adeebnqo.github.io/feed.xml</id><title type="html">Home</title><subtitle>Academic from South Africa</subtitle><author><name>{&quot;name&quot;=&gt;nil, &quot;avatar&quot;=&gt;&quot;Zola.png&quot;, &quot;bio&quot;=&gt;&quot;South African academic. Contact via zmahlaza AT cs.uct.ac.za&quot;, &quot;location&quot;=&gt;&quot;Cape Town, South Africa&quot;, &quot;employer&quot;=&gt;nil, &quot;pubmed&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=vTPgEQMAAAAJ&amp;hl=en&amp;oi=ao&quot;, &quot;email&quot;=&gt;nil, &quot;researchgate&quot;=&gt;&quot;https://www.researchgate.net/profile/Zola-Mahlaza\&quot;&quot;, &quot;uri&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;AdeebNqo&quot;, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;nil, &quot;orcid&quot;=&gt;&quot;https://orcid.org/0000-0001-9829-1480&quot;, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;nqongeveg&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil}</name></author><entry><title type="html">Noun classification thoughts II</title><link 
href="https://adeebnqo.github.io/post/2025-thoughts" rel="alternate" type="text/html" title="Noun classification thoughts II" /><published>2025-12-19T16:08:00-08:00</published><updated>2025-12-19T16:08:00-08:00</updated><id>https://adeebnqo.github.io/post/thoughts</id><content type="html" xml:base="https://adeebnqo.github.io/post/2025-thoughts"><![CDATA[<p>The exam period, and all its admin, is officially over. As a way of winding down, I decided to take some time and revisit the noun classification task.</p>

<h2 id="open-questions">Open question(s)</h2>

<p>In the last instalment, detailed in my previous <a href="https://adeebnqo.github.io/post/2025-nounclassification">post</a>, I investigated the extent to which a simple multilingual model can solve noun class disambiguation for Niger-Congo B languages, and I relied on the tokenizer created by Meyer (2024). Since then, I’ve been thinking about stripping out that tokenizer and relying on something even simpler to get a sense of how such models perform. Interestingly, after I wrote that post, I noticed that Nakashole (2025) has investigated the use of even more complex models on a new and open dataset (see <a href="https://github.com/okalai-ai/moimoe">https://github.com/okalai-ai/moimoe</a>).</p>

<p>That work is relevant to us because it also features isiXhosa and isiZulu. Instead of increasing complexity and, perhaps following Nakashole’s lead, investigating a Mixture-of-Experts approach, I have been wondering to what extent even simpler models can succeed at this task. Specifically, I am curious about the extent to which one can rely on a simpler model and focus on hyperparameter optimisation to yield a good model.</p>

<h2 id="updated-model-and-question">Updated model and question</h2>

<p>To investigate this, I created a simple multilingual model using PyTorch on the same dataset used for the last post (see <a href="https://adeebnqo.github.io/post/2025-nounclassification">https://adeebnqo.github.io/post/2025-nounclassification</a>). An overview of the model, with some details dropped for simplicity, is given in the following visualisation:</p>

<p><img src="/images/SimpleSelfAttentionModel.png" alt="" /></p>
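<p>Since some details are dropped from the figure, here is a minimal sketch in PyTorch of what such a self-attention classifier could look like. Everything below is illustrative: the vocabulary size, embedding dimension, number of classes, and pooling choice are placeholders, not the actual values or design decisions used.</p>

```python
import torch
import torch.nn as nn

class SimpleSelfAttentionModel(nn.Module):
    """Minimal single-head self-attention classifier over subword tokens."""

    def __init__(self, vocab_size: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=1, batch_first=True)
        self.classify = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        attended, _ = self.attn(x, x, x)   # self-attention: Q = K = V = x
        pooled = attended.mean(dim=1)      # mean-pool over the token sequence
        return self.classify(pooled)       # (batch, num_classes) logits

# Illustrative sizes only: a batch of 4 nouns, 8 subword tokens each.
model = SimpleSelfAttentionModel(vocab_size=1000, embed_dim=64, num_classes=17)
logits = model(torch.randint(0, 1000, (4, 8)))
```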

<p>The model shows some improvement over the last iteration, as demonstrated by the following results:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Prec</th>
      <th>Rec</th>
      <th>F1</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Micro</td>
      <td>0.56</td>
      <td>0.56</td>
      <td>0.56</td>
    </tr>
    <tr>
      <td>Macro</td>
      <td>0.57</td>
      <td>0.53</td>
      <td>0.53</td>
    </tr>
  </tbody>
</table>

<p>Our hyperparameter search focused on tuning the learning rate, weight decay, and betas used when training the model. Looking at the final output, it is clear that the performance is still not great. Given this, the question at the back of my mind now is: while existing dictionaries are dated, do they not capture linguistic knowledge that still holds for noun classification today?</p>
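<p>A search over those three hyperparameters can be enumerated as a simple grid; the values below are hypothetical, since the actual search space is not listed here, and the linear layer merely stands in for the real classifier:</p>

```python
import itertools

import torch

# Hypothetical search space; the values actually tried are not listed in the post.
SEARCH_SPACE = {
    "lr": [1e-4, 3e-4, 1e-3],
    "weight_decay": [0.0, 1e-2],
    "betas": [(0.9, 0.999), (0.9, 0.98)],
}

def configurations(space):
    """Yield every combination of hyperparameter values as a dict."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

for cfg in configurations(SEARCH_SPACE):
    model = torch.nn.Linear(8, 2)  # stand-in for the real classifier
    optimiser = torch.optim.Adam(
        model.parameters(),
        lr=cfg["lr"],
        weight_decay=cfg["weight_decay"],
        betas=cfg["betas"],
    )
    # ... train on the training split, then score on the validation split ...
```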

<h1 id="references">References</h1>

<ul>
  <li>Meyer, 2024: <a href="https://huggingface.co/francois-meyer/nguni-xlmr-large">https://huggingface.co/francois-meyer/nguni-xlmr-large</a></li>
  <li>Nakashole, 2025: <a href="https://ndapa.us/assets/docs/papers/2025-moi-acl.pdf">https://ndapa.us/assets/docs/papers/2025-moi-acl.pdf</a></li>
</ul>]]></content><author><name>{&quot;name&quot;=&gt;nil, &quot;avatar&quot;=&gt;&quot;Zola.png&quot;, &quot;bio&quot;=&gt;&quot;South African academic. Contact via zmahlaza AT cs.uct.ac.za&quot;, &quot;location&quot;=&gt;&quot;Cape Town, South Africa&quot;, &quot;employer&quot;=&gt;nil, &quot;pubmed&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=vTPgEQMAAAAJ&amp;hl=en&amp;oi=ao&quot;, &quot;email&quot;=&gt;nil, &quot;researchgate&quot;=&gt;&quot;https://www.researchgate.net/profile/Zola-Mahlaza\&quot;&quot;, &quot;uri&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;AdeebNqo&quot;, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;nil, &quot;orcid&quot;=&gt;&quot;https://orcid.org/0000-0001-9829-1480&quot;, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;nqongeveg&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil}</name></author><category term="hlt" /><category term="nguni" /><summary type="html"><![CDATA[The exam period, and all its admin, is officially over. 
As a way of winding down, I decided to take some time and revisit the noun classification task.]]></summary></entry><entry><title type="html">Musings on noun classification</title><link href="https://adeebnqo.github.io/post/2025-nounclassification" rel="alternate" type="text/html" title="Musings on noun classification" /><published>2025-09-16T06:46:00-07:00</published><updated>2025-09-16T06:46:00-07:00</updated><id>https://adeebnqo.github.io/post/musings-noun-classification</id><content type="html" xml:base="https://adeebnqo.github.io/post/2025-nounclassification"><![CDATA[<p>Noun classification is a significant aspect of the grammar of Niger-Congo B languages.</p>

<p>If you are interested in learning a language that belongs to this family, you need to understand that the phenomenon is one of the most important things to master, since it is crucial for ensuring that words agree with each other, where appropriate, when forming sentences. For instance, the isiXhosa sentence “hlamba izitya ngoba zimdaka!” (Wash the dishes because they are dirty) is sensible, while the sentence “hlamba izitya ngoba <u>simdaka</u>” (Wash the dishes because it is dirty) is not correct, despite the minor difference between the two isiXhosa sentences. There are a variety of resources that one can find online for learning the notion of noun classes (e.g., <a href="https://quizlet.com/za/887926330/zulu-noun-classes-flash-cards/">https://quizlet.com/za/887926330/zulu-noun-classes-flash-cards/</a>). To the best of my knowledge, these are manually created!</p>

<h2 id="existing-work">Existing work</h2>

<p>Unfortunately, from a computing perspective, modelling noun classes has been largely neglected. The challenges associated with the task may not be obvious, especially to people who work on well-resourced languages. If you fit that description, you may wonder why one cannot simply rely on a dictionary, or create resources in the same vein as WordNet or FrameNet that carry noun class information. The short answer is that paper dictionaries for most languages are dated, and creating lexical datasets is possible, to some extent, for only a handful of Niger-Congo B languages. The resources created by Eckart et al. (2019) are a good example, since the model and associated dataset have noun class information. Unfortunately, if you want to tackle the problem for all, or most, NCB languages, then the assumption that you have guaranteed access to ‘open’ paper dictionaries, from which you can create resources that are findable, accessible, and reusable, goes out of the window! At best, you may only be able to create large datasets using paper dictionaries published in the 1800s.</p>

<p>Despite these challenges, I must mention that people have been working hard on different aspects of noun classification. For instance, there are interesting efforts aimed at pinning down the semantics of noun classes and possibly creating an ontology (Keet, 2024). We have also created noun classifiers for isiZulu (Mahlaza et al., 2025; Sayed et al., 2025) and Sepedi (Alex and Jonathan’s work on Sepedi is unpublished) using a variety of methods. A major challenge we faced was the lack of datasets!</p>

<p>Our efforts relied on relatively limited data, in terms of the languages supported, and we could not share the datasets since we had to extract them from recent paper dictionaries. While extraction was semi-automated, the data still required manual cleaning. With that context, I’ve been curious about the extent to which one can rely on manually annotated and open datasets extracted from the Internet as a basis for building noun class identification models. Specifically, I’ve been wondering to what extent one can rely on the SADiLaR datasets (Gaustad, 2024), as an example, to build a multilingual model for predicting noun classes.</p>

<h2 id="sadilar-data">SADiLaR data</h2>

<p>The first challenge with relying on the SADiLaR dataset(s) is that one needs to identify the nouns and their classes, since the data includes a number of parts of speech. We iterated over all the words found in the datasets, extracted words that contain exactly three morphological tags, and filtered out everything that does not include the following annotations: <b>NprePre</b> (i.e., the augment), <b>Bpre</b> (noun prefix), and <b>NStem</b> (noun stem). Our process produced the following data sizes:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Train size</th>
      <th>Valid size</th>
      <th>Test size</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>IsiZulu</td>
      <td>702</td>
      <td>180</td>
      <td>121</td>
    </tr>
    <tr>
      <td>IsiXhosa</td>
      <td>843</td>
      <td>217</td>
      <td>145</td>
    </tr>
    <tr>
      <td>IsiNdebele</td>
      <td>765</td>
      <td>196</td>
      <td>132</td>
    </tr>
    <tr>
      <td>siSwati</td>
      <td>323</td>
      <td>83</td>
      <td>56</td>
    </tr>
  </tbody>
</table>
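<p>The filtering step described above can be sketched in plain Python. The <code>(word, tags)</code> input format and the example entries are assumptions for illustration; the real SADiLaR files have their own annotation format that needs parsing first.</p>

```python
# Tag names as given in the post; the annotation format below is assumed.
REQUIRED_TAGS = {"NprePre", "Bpre", "NStem"}

def extract_nouns(annotated_words):
    """Keep only words analysed as exactly augment + noun prefix + noun stem.

    `annotated_words` is assumed to be an iterable of (word, tags) pairs,
    where `tags` lists the word's morphological labels.
    """
    return [
        (word, tags)
        for word, tags in annotated_words
        if len(tags) == 3 and set(tags) == REQUIRED_TAGS
    ]

words = [
    ("umntu", ["NprePre", "Bpre", "NStem"]),  # full noun: kept
    ("hlamba", ["VRoot", "VerbTerm"]),        # hypothetical verb tags: dropped
]
nouns = extract_nouns(words)
```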

<p>Overall, the distribution of the nouns per class is given below:</p>

<table>
  <thead>
    <tr>
      <th style="text-align: center">IsiNdebele</th>
      <th style="text-align: center">siSwati</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: center"><img src="/images/nr-nc.png" alt="" /></td>
      <td style="text-align: center"><img src="/images/ss-nc.png" alt="" /></td>
    </tr>
  </tbody>
</table>

<table>
  <thead>
    <tr>
      <th style="text-align: center">IsiXhosa</th>
      <th style="text-align: center">IsiZulu</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: center"><img src="/images/xh-nc.png" alt="" /></td>
      <td style="text-align: center"><img src="/images/zu-nc.png" alt="" /></td>
    </tr>
  </tbody>
</table>

<p>For basic testing, let each noun be represented by an input vector obtained by tokenising the strings using Nguni-XLMR-large (Meyer, 2024). We trained a simple Multilayer Perceptron (MLP) on the dataset to obtain a multilingual model that caters for all four Nguni languages. The performance of the model is given in the table below:</p>

<table>
  <thead>
    <tr>
      <th> </th>
      <th>Prec</th>
      <th>Rec</th>
      <th>F1</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Micro</td>
      <td>0.11</td>
      <td>0.11</td>
      <td>0.11</td>
    </tr>
    <tr>
      <td>Macro</td>
      <td>0.03</td>
      <td>0.10</td>
      <td>0.05</td>
    </tr>
  </tbody>
</table>
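<p>For reference, the embed-then-classify setup described above can be sketched as follows. Random vectors stand in for the Nguni-XLMR-large embeddings, and the hidden size, layer widths, and class count are all illustrative placeholders rather than the actual experimental settings.</p>

```python
import torch
import torch.nn as nn

# Stand-in features: in the experiment each noun is embedded via
# Nguni-XLMR-large; random vectors take their place here.
NUM_CLASSES = 17  # illustrative; the real label set comes from the data
features = torch.randn(32, 1024)  # 32 nouns; 1024 is an assumed hidden size
labels = torch.randint(0, NUM_CLASSES, (32,))

mlp = nn.Sequential(
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)

optimiser = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):  # a few illustrative steps, not a real training run
    optimiser.zero_grad()
    loss = loss_fn(mlp(features), labels)
    loss.backward()
    optimiser.step()

predictions = mlp(features).argmax(dim=1)
```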

<p>This performance is terrible! The issue is not that we are relying on a simple model: we have also tried relying on the Nguni-XLMR-Large model itself, and the results, while different, are equally disappointing. The issue is the dataset!</p>

<p>It is too small, the distribution of nouns across the different classes is terribly uneven, and I doubt a model built on such data would be useful outside government documents (even if its performance on this small dataset were great).</p>

<h1 id="references">References</h1>

<ul>
  <li>Keet, 2024: <a href="https://www.utwente.nl/en/eemcs/fois2024/resources/papers/keet-preliminary-steps-toward-an-ontology-for-noun-classes-in-niger-congo-languages.pdf">https://www.utwente.nl/en/eemcs/fois2024/resources/papers/keet-preliminary-steps-toward-an-ontology-for-noun-classes-in-niger-congo-languages.pdf</a></li>
  <li>Mahlaza et al., 2025: <a href="https://aclanthology.org/2025.loreslm-1.35.pdf">https://aclanthology.org/2025.loreslm-1.35.pdf</a></li>
  <li>Sayed et al., 2025: <a href="https://aclanthology.org/2025.resourceful-1.23.pdf">https://aclanthology.org/2025.resourceful-1.23.pdf</a></li>
  <li>Meyer, 2024: <a href="https://huggingface.co/francois-meyer/nguni-xlmr-large">https://huggingface.co/francois-meyer/nguni-xlmr-large</a></li>
  <li>Eckart et al., 2019: <a href="https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2019.17">https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2019.17</a></li>
  <li>Gaustad, 2024: <a href="https://repo.sadilar.org/items/33a71474-0847-43d0-a125-b0a971b0bbba">https://repo.sadilar.org/items/33a71474-0847-43d0-a125-b0a971b0bbba</a>, <a href="https://repo.sadilar.org/items/aec37cd3-0e74-42ac-a578-4c71fafb3bfe">https://repo.sadilar.org/items/aec37cd3-0e74-42ac-a578-4c71fafb3bfe</a>, <a href="https://repo.sadilar.org/items/df53f13b-794b-441b-a02f-3fd673803932">https://repo.sadilar.org/items/df53f13b-794b-441b-a02f-3fd673803932</a>, <a href="https://repo.sadilar.org/items/4227c37a-f4b9-47dc-9b5e-e2e94218350b">https://repo.sadilar.org/items/4227c37a-f4b9-47dc-9b5e-e2e94218350b</a></li>
</ul>]]></content><author><name>{&quot;name&quot;=&gt;nil, &quot;avatar&quot;=&gt;&quot;Zola.png&quot;, &quot;bio&quot;=&gt;&quot;South African academic. Contact via zmahlaza AT cs.uct.ac.za&quot;, &quot;location&quot;=&gt;&quot;Cape Town, South Africa&quot;, &quot;employer&quot;=&gt;nil, &quot;pubmed&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=vTPgEQMAAAAJ&amp;hl=en&amp;oi=ao&quot;, &quot;email&quot;=&gt;nil, &quot;researchgate&quot;=&gt;&quot;https://www.researchgate.net/profile/Zola-Mahlaza\&quot;&quot;, &quot;uri&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;AdeebNqo&quot;, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;nil, &quot;orcid&quot;=&gt;&quot;https://orcid.org/0000-0001-9829-1480&quot;, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;nqongeveg&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil}</name></author><category term="hlt" /><category term="nguni" /><summary type="html"><![CDATA[Noun classification is a significant aspect of the grammar of Niger-Congo B languages.]]></summary></entry><entry><title type="html">On providing feedback to students</title><link href="https://adeebnqo.github.io/post/2021-feedback" rel="alternate" type="text/html" title="On providing feedback to students" /><published>2021-03-05T05:00:00-08:00</published><updated>2021-03-05T05:00:00-08:00</updated><id>https://adeebnqo.github.io/post/feedback-for-students</id><content type="html" 
xml:base="https://adeebnqo.github.io/post/2021-feedback"><![CDATA[<p>One of the things I am interested in is theories for constructing and delivering feedback to your students.</p>

<p>Often, when you teach, you construct problems, or take them from the textbook you have, and work through them together with your students in class. Back when I was in the lower grades, teachers did this most of the time (there were also times when you were expected to solve a problem on your own as a student). What I remember most from that period is that it was easy and fun to try to solve those problems, all together in class with the teacher. And this was the case across all subjects, from isiXhosa (e.g., when we analysed poems) through to Mathematics.</p>

<p><img src="https://image.shutterstock.com/image-vector/questions-concept-flat-tiny-person-600w-1627675000.jpg" alt="Class participation" /></p>

<p>There is something I noticed upon arriving at university, however: the number of students in a single class is large, which means there are students who are not acquainted with one another. This makes some students reluctant to speak in class. The biggest reason for this, perhaps, is shyness. I am certain this could be addressed if we lowered the class sizes at each university (+ used various techniques to get your students acquainted with one another, things like <a href="https://en.wikipedia.org/wiki/Icebreaker_(facilitation)">Ice Breakers</a>). It is difficult to solve the problem this way in South Africa, however, because the number of students <a href="https://www.sanews.gov.za/south-africa/more-south-africans-higher-education">at university keeps rising</a>, although it could work if the government built additional universities. In other words, this solution depends on many people; it is not something you can do on your own as a university teacher.</p>

<p><img src="https://image.shutterstock.com/image-vector/professor-writing-quantum-physics-formula-600w-1361866382.jpg" alt="Large classes" /></p>

<p>When I look at the students in a class, it seems to me that you can place them into three groups: those who have no problem speaking or asking questions (the first group); those who want to speak or ask, but will only do so at certain times (the second group); and those who are afraid to ask or speak, and who may only speak when no other students are around (the third group). In my view, this raises two questions when you give feedback to students:</p>

<ol>
  <li>For those in groups 1 and 2, how do you keep them in those groups?</li>
  <li>For those in group 3, how do you remove their shyness so that they become like those in the other groups? And if you cannot change them, how do you support them?</li>
</ol>

<p>I use approaches grounded in well-established theories. Since my research area is <a href="https://en.wikipedia.org/wiki/Natural-language_generation">NLG</a>, I will discuss these theories through the lens of NLG. When we look at the theories used by people who build NLG systems for generating feedback, we see the different kinds of things that must be considered. These NLG systems come in three flavours: those built by someone who wants to give feedback to students working through a tutorial (e.g., Moore et al. 2004), those that give feedback on how a learner performs at a particular task (e.g., Williams and Reiter 2005 give feedback on how learners read and write), and those that give feedback on something you do routinely (e.g., Braun and Reiter 2018 give feedback on how a person drives). There is also work by others in this space. The point is that, when constructing your own strategy, you should know that you can lean on different theories; it all depends on the circumstances you find yourself in.</p>

<p><img src="https://people.cs.uct.ac.za/~zmahlaza/site/img/RoughDraft.png" alt="Possible architecture" /></p>

<p>This week, I started teaching a course on <a href="https://en.wikipedia.org/wiki/Compiler">Compilers</a>, and I have been using these theories when constructing my responses. My wish is to make it easy to ask questions in class, to construct clear answers, etc., so that everyone grasps what is being taught. The question I am now asking myself, though, is: how can I measure the effectiveness (see <a href="https://en.wikipedia.org/wiki/Efficacy">efficacy</a>) of my approach? As I see it, I will never know whether something needs to change unless I can measure it. Beyond that, if I have a way to measure it, I could start building NLG systems to make the work easier. I could even compare two NLG systems (if I have the time to build several), e.g., one that responds in Sesotho vs. one that responds in English.</p>

<h6 id="references">References</h6>

<ol>
  <li>Johanna D. Moore, Kaska Porayska-Pomsta, Sebastian Varges, and Claus Zinn. 2004. Generating tutorial feedback with affect. In Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference, Miami Beach, Florida, USA, pages 923–928. AAAI Press</li>
  <li>Sandra Williams and Ehud Reiter. 2005. Generating readable texts for readers with low basic skills. In Proceedings of the Tenth European Workshop on Natural Language Generation, ENLG 2005, Aberdeen, UK, August 8-10, 2005. ACL</li>
  <li>Daniel Braun, Ehud Reiter, and Advaith Siddharthan. 2018. SaferDrive: An NLG-based behaviour change support system for drivers. Nat. Lang. Eng., 24(4):551–588</li>
</ol>]]></content><author><name>{&quot;name&quot;=&gt;nil, &quot;avatar&quot;=&gt;&quot;Zola.png&quot;, &quot;bio&quot;=&gt;&quot;South African academic. Contact via zmahlaza AT cs.uct.ac.za&quot;, &quot;location&quot;=&gt;&quot;Cape Town, South Africa&quot;, &quot;employer&quot;=&gt;nil, &quot;pubmed&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=vTPgEQMAAAAJ&amp;hl=en&amp;oi=ao&quot;, &quot;email&quot;=&gt;nil, &quot;researchgate&quot;=&gt;&quot;https://www.researchgate.net/profile/Zola-Mahlaza\&quot;&quot;, &quot;uri&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;AdeebNqo&quot;, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;nil, &quot;orcid&quot;=&gt;&quot;https://orcid.org/0000-0001-9829-1480&quot;, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;nqongeveg&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil}</name></author><category term="nlg" /><category term="isiXhosa" /><summary type="html"><![CDATA[One of the things I am interested in is theories for constructing and delivering feedback to your students.]]></summary></entry><entry><title type="html">Changes to the blog.</title><link href="https://adeebnqo.github.io/post/2020-changes" rel="alternate" type="text/html" title="Changes to the blog."
/><published>2020-04-07T09:00:00-07:00</published><updated>2020-04-07T09:00:00-07:00</updated><id>https://adeebnqo.github.io/post/changes-to-blog</id><content type="html" xml:base="https://adeebnqo.github.io/post/2020-changes"><![CDATA[<p>When I was doing my masters, I liked writing about the papers I was reading (e.g., <a href="https://adeebnqo.github.io/blog/squibs-and-discussions/">a review of the paper by van Deemter et al.</a>).</p>

<p>I doubt, however, that anyone ever read those posts. I say this because, in all the time I have been writing on this blog, there is only one post that I have ever heard people (yes, people, not just one person) ask me about. That post was my review of Sis’ Noviolet Bulawayo’s book. Perhaps the reason for this is that nobody wants to be bored with reviews of papers they know nothing about. For that reason, there will be changes to this blog!</p>

<h1 id="into-ebendiyenza">What I used to do</h1>

<p>What I do know is that I wrote about what I took from those papers. Not once did I explain what my own work is, why I think it is important, etc. Beyond that, I did not really write my own opinions; I wrote summaries. I do not know why I never saw how dull that is. Fortunately, we have moved past that.</p>

<p>This is my final year of the PhD I am doing, and I have grown in the way I read papers. I find that this Covid-19 pandemic we are living under has made me reflect on my blog. To put it plainly, it has made me think about the way I write. It has forced me to consider who I want to read this blog and what I want them to read!</p>

<h1 id="utshintso">The changes</h1>

<p>Starting today, I am reviving this blog. I want to write about things that can benefit everyone. I also want to help people who are starting their Hons., MSc., or PhD degrees. I will write about the following:</p>

<ul>
  <li>What is natural language generation, and how can it help us in South Africa?</li>
  <li>Problems you might run into while doing any of the degrees I have just mentioned</li>
  <li>What is a theory, how is it built, and why is it important?</li>
  <li>Simulation models that could be used to solve problems I see in the places I move through</li>
  <li>and more!</li>
</ul>

<p>I hope the things I will write will be of use.</p>

<p>Thank you.</p>]]></content><author><name>{&quot;name&quot;=&gt;nil, &quot;avatar&quot;=&gt;&quot;Zola.png&quot;, &quot;bio&quot;=&gt;&quot;South African academic. Contact via zmahlaza AT cs.uct.ac.za&quot;, &quot;location&quot;=&gt;&quot;Cape Town, South Africa&quot;, &quot;employer&quot;=&gt;nil, &quot;pubmed&quot;=&gt;nil, &quot;googlescholar&quot;=&gt;&quot;https://scholar.google.com/citations?user=vTPgEQMAAAAJ&amp;hl=en&amp;oi=ao&quot;, &quot;email&quot;=&gt;nil, &quot;researchgate&quot;=&gt;&quot;https://www.researchgate.net/profile/Zola-Mahlaza\&quot;&quot;, &quot;uri&quot;=&gt;nil, &quot;bitbucket&quot;=&gt;nil, &quot;codepen&quot;=&gt;nil, &quot;dribbble&quot;=&gt;nil, &quot;flickr&quot;=&gt;nil, &quot;facebook&quot;=&gt;nil, &quot;foursquare&quot;=&gt;nil, &quot;github&quot;=&gt;&quot;AdeebNqo&quot;, &quot;google_plus&quot;=&gt;nil, &quot;keybase&quot;=&gt;nil, &quot;instagram&quot;=&gt;nil, &quot;impactstory&quot;=&gt;nil, &quot;lastfm&quot;=&gt;nil, &quot;linkedin&quot;=&gt;nil, &quot;orcid&quot;=&gt;&quot;https://orcid.org/0000-0001-9829-1480&quot;, &quot;pinterest&quot;=&gt;nil, &quot;soundcloud&quot;=&gt;nil, &quot;stackoverflow&quot;=&gt;nil, &quot;steam&quot;=&gt;nil, &quot;tumblr&quot;=&gt;nil, &quot;twitter&quot;=&gt;&quot;nqongeveg&quot;, &quot;vine&quot;=&gt;nil, &quot;weibo&quot;=&gt;nil, &quot;xing&quot;=&gt;nil, &quot;youtube&quot;=&gt;nil, &quot;wikipedia&quot;=&gt;nil}</name></author><category term="isiXhosa" /><summary type="html"><![CDATA[When I was doing my masters, I liked writing about the papers I was reading (e.g., a review of the paper by van Deemter et al.).]]></summary></entry></feed>