I observed an interesting behavior while playing with Google Translate today:
Not sure if this is the result of some incorrectly tagged training samples, or if the underlying neural translation model has accumulated a small sense of humor. :) If we can somehow shape the personality of a large neural network through its training dataset, can we say the trained model is, to the slightest extent, conscious?