Intro. to Synthetic Media Final!
As I mentioned in my last post for this class, I decided to train the machine using Arabic text. Last week I trained it on 1 MB of text, and the results were impressive spelling-wise but not logically; the sentences didn't make sense. So, for my final, I decided to increase the file size to 5.8 MB. Since I'm using a text file that's almost 6x bigger, I expect better results.
I saved 1,000 pages' worth of Arabic text gathered from Wikipedia about different topics, but mainly information about different countries, including the US, UK, Egypt, Jordan, Iraq, Kuwait, and so on. I also looked up different religions and some well-known wars from the past. I had to save five separate Word documents of about 1 MB each and combine them into one big file before exporting it as a .txt file, because both Word and Google Docs kept crashing whenever I saved more than 300 pages of Arabic.
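The merging step above could also be scripted instead of done by hand. Here's a minimal sketch in Python: the file names and the short sample phrases are placeholders standing in for the five exported documents, not the actual Wikipedia data. Reading and writing with UTF-8 keeps the Arabic characters intact.

```python
# Hypothetical sketch: combine several exported .txt files into one
# training corpus. File names and sample text are placeholders.
from pathlib import Path

# Stand-ins for the five ~1 MB exports (the real ones came from Word).
samples = [
    "مصر دولة في شمال أفريقيا.",   # "Egypt is a country in North Africa."
    "الأردن دولة عربية.",          # "Jordan is an Arab country."
    "العراق دولة في غرب آسيا.",    # "Iraq is a country in West Asia."
]
parts = []
for i, text in enumerate(samples, start=1):
    p = Path(f"arabic_part_{i}.txt")
    p.write_text(text, encoding="utf-8")
    parts.append(p)

# Merge into one corpus file, separating documents with newlines.
with open("arabic_corpus.txt", "w", encoding="utf-8") as out:
    for part in parts:
        out.write(part.read_text(encoding="utf-8"))
        out.write("\n")
```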
I chose Arabic because I'm Egyptian and it's my first language, and also because it's nice to see different cultures represented everywhere. I expect Runway to become a big thing worldwide, and I love that I'm somehow one of the first people to introduce Arabic to the program.
I started training the model using 2,500 steps, which is 1,500 steps more than last time. It took Runway two hours, and I got a perplexity score of 4.49. While reading the generated Arabic text, I was impressed! However, while reviewing the steps, I noticed the perplexity score was going down and there was room for improvement, so I hit that continue-training button. This time I chose to train the model for 1,900 steps, mainly because I had enough credits for only 1,900 steps. The perplexity score didn't change much throughout the second training; in fact, it increased to 4.61.
I asked Sohaila, the only other Arabic speaker on the floor, to read both generated texts. At first, Sohaila thought the first trained text was a piece from an actual news article. She was surprised when I told her the text was generated by Runway. When she read the 'extra' trained text, however, she noticed right away that it was fake.
I'm very impressed by how accurate the perplexity score is; the text made more sense when the score was low. The spelling, however, was accurate throughout both generated texts regardless of the score.
During the process, I learned that in order to preserve the shape of the Arabic letters, I had to change the character encoding to UTF-8. I also learned that about 160 pages of Arabic text equal a 1 MB file.
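The encoding point above can be seen in a tiny experiment: Arabic text round-trips cleanly through UTF-8, while decoding the same bytes with a wrong encoding like Latin-1 turns the letters into unreadable mojibake. The sample phrase here is just an illustration, not from my dataset.

```python
# Quick demonstration of why UTF-8 matters for Arabic text.
phrase = "اللغة العربية"  # "the Arabic language"

data = phrase.encode("utf-8")          # each Arabic letter takes 2 bytes,
                                       # which also explains why Arabic pages
                                       # add up to megabytes quickly
restored = data.decode("utf-8")        # UTF-8 preserves the letters exactly

garbled = data.decode("latin-1")       # wrong encoding: the letter shapes
                                       # are lost and replaced with junk
print(restored)
print(garbled)
```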
If I had more time, I would polish the dataset a little more. I suspect the reason the generated text doesn't make sense is that I copied many articles about different topics just for the sake of making the file bigger; I assume sticking to one topic would generate more understandable text.
I can see people using this trained model to help students with essays. It could also help people get instant information about a country, for example, or anything for that matter.