A) 1. I used headlines from four different news sources: CNN, NPR, Fox, and Al Jazeera. These were the headlines used:

CNN:
- 'Tech titans face Congress over their sheer power'
- 'GOP congressman who frequently refuses to wear a mask tests positive'
- 'US to withdraw nearly 12,000 troops from Germany in move that will cost billions'

NPR:
- 'Oregon Gov. says federal officers will begin phased withdrawal from Portland'
- 'Why the novel coronavirus has the power to launch a pandemic'

Fox:
- 'McConnell suggests Dems are trying to sabotage the next coronavirus bill'
- 'British family on vacation finds migrants hiding in their car's rooftop cargo box'
- 'Pakistani man on trial for blasphemy shot dead in court'

Al Jazeera:
- 'Kenya arrests man wanted for illegal ivory trading'
- 'Iran fires underground ballistic missiles for first time'
The headlines labeled most sarcastic were 'GOP congressman who frequently refuses to wear a mask tests positive' and 'Iran fires underground ballistic missiles for first time', at 0.339 and 0.944 respectively. The least sarcastic was 'Oregon Gov. says federal officers will begin phased withdrawal from Portland' at 9.46×10^-11.
CNN and Al Jazeera had the most sarcastic headlines, whereas NPR had the least sarcastic. Overall, CNN's headlines received higher values, meaning they were usually rated as more sarcastic.
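For reference, scores like these could be produced with something like the minimal sketch below. It assumes a Keras sarcasm classifier (`model`, with a sigmoid output) and its fitted `tokenizer` already exist from training; the variable names and padding length are assumptions, not the actual assignment code.

```python
# Sketch only: `tokenizer` and `model` are assumed to come from an
# already-trained Keras sarcasm classifier (one probability per headline).
from tensorflow.keras.preprocessing.sequence import pad_sequences

headlines = [
    "Tech titans face Congress over their sheer power",
    "GOP congressman who frequently refuses to wear a mask tests positive",
    "Iran fires underground ballistic missiles for first time",
    # ... remaining headlines from the four sources ...
]

MAX_LEN = 16  # assumed padding length used when the model was trained
seqs = tokenizer.texts_to_sequences(headlines)  # words -> integer ids
padded = pad_sequences(seqs, maxlen=MAX_LEN, padding="post", truncating="post")

# model.predict returns one sarcasm probability per headline,
# e.g. 9.46e-11 for the least sarcastic one above
for headline, score in zip(headlines, model.predict(padded)[:, 0]):
    print(f"{score:.3e}  {headline}")
```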
B) 1. The RNN model uses the last object, in this case a letter, to predict the next one. It then uses the last two objects to predict the one after that, and so on. It learns which letter tends to follow a given sequence and remembers those relationships as probabilities. Because the model is driven by probabilities and by the elements used to start the prediction, the first letter is crucial: changing it can change the outcome of the whole prediction, as it did between my two runs, one seeded on Romeo and the other on Juliet.
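A minimal sketch of that seeded, letter-by-letter sampling loop, assuming a trained TensorFlow character-level model (`model`) and the character lookup tables (`char2idx`, `idx2char`) built at training time; these names are placeholders rather than my actual variables:

```python
# Assumes a trained, stateful char-level RNN `model` plus `char2idx`
# (char -> integer id) and `idx2char` (id -> char) from training.
import tensorflow as tf

def generate(model, seed, num_chars=200):
    input_ids = tf.expand_dims([char2idx[c] for c in seed], 0)  # encode seed, add batch dim
    out = []
    model.reset_states()  # stateful RNN: clear memory between runs
    for _ in range(num_chars):
        logits = model(input_ids)[:, -1, :]  # distribution over the next letter
        next_id = tf.random.categorical(logits, num_samples=1)[0, 0].numpy()
        input_ids = tf.expand_dims([next_id], 0)  # feed the prediction back in
        out.append(idx2char[next_id])
    return seed + "".join(out)

# Two different seeds can yield entirely different generated text:
print(generate(model, "ROMEO: "))
print(generate(model, "JULIET: "))
```

Sampling from the predicted distribution (rather than always taking the single most likely letter) is also why the output can differ between runs, on top of the seed's effect.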
C)
- The translations turned out well, as their main ideas were accurate to the original English sentences. However, both translations substituted synonyms for some of the original English words; for example, 'cheesecake' became 'cake' and 'gorgeous' became 'pretty'. One sentence also failed to translate due to a dictionary issue (estera = mat), as sketched below. Overall, the translations were still good, and the dictionary may need to be expanded in order to translate more words and sentences.
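A rough sketch of this word-by-word lookup, and how one missing entry fails a whole sentence, is below; the mini-dictionary and sentences are illustrative stand-ins, not the assignment's actual data.

```python
# Illustrative stand-in: a tiny Spanish -> English dictionary. One missing
# entry ("estera") is enough to make a whole sentence untranslatable.
es_to_en = {
    "la": "the",
    "tarta": "cake",     # synonym substitution: "cake" instead of "cheesecake"
    "es": "is",
    "bonita": "pretty",  # "pretty" instead of "gorgeous"
}

def translate(sentence: str) -> str:
    out = []
    for word in sentence.lower().split():
        if word not in es_to_en:
            # a single unknown word breaks the whole sentence
            raise KeyError(f"dictionary issue: no entry for '{word}'")
        out.append(es_to_en[word])
    return " ".join(out)

print(translate("la tarta es bonita"))   # -> the cake is pretty
try:
    translate("la estera es bonita")
except KeyError as err:
    print(err)                           # dictionary issue: no entry for 'estera'

# Expanding the dictionary fixes the failed sentence:
es_to_en["estera"] = "mat"
print(translate("la estera es bonita"))  # -> the mat is pretty
```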