The algorithm would begin by tokenizing the input text into individual words. Each word would then be compared against a dictionary of known words, and any word not found in the dictionary would be flagged as a potential misspelling. Next, a set of predefined rules would be applied to catch unexpected grammar or syntax, such as immediately repeated words or malformed word order. Finally, a machine learning model trained on natural-language text would flag passages that deviate from expected language patterns, catching inconsistencies that the dictionary and rule checks alone would miss.
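As a rough illustration of the first three steps, here is a minimal Python sketch. The tiny `KNOWN_WORDS` set and the single repeated-word rule are placeholders I've made up for demonstration; a real system would use a full word list, a broader rule set, and a trained model for the final step.

```python
import re

# Hypothetical stand-in for a full dictionary of known words.
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog", "a"}

def tokenize(text: str) -> list[str]:
    """Split the text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def find_unknown_words(tokens: list[str]) -> list[str]:
    """Flag tokens that do not appear in the dictionary."""
    return [t for t in tokens if t not in KNOWN_WORDS]

def find_rule_violations(tokens: list[str]) -> list[str]:
    """Apply simple predefined rules; here, only a repeated-word check."""
    issues = []
    for prev, curr in zip(tokens, tokens[1:]):
        if prev == curr:
            issues.append(f"repeated word: '{curr}'")
    return issues

def check_text(text: str) -> dict:
    """Run tokenization, dictionary lookup, and rule checks on the text."""
    tokens = tokenize(text)
    return {
        "unknown_words": find_unknown_words(tokens),
        "rule_violations": find_rule_violations(tokens),
    }

if __name__ == "__main__":
    print(check_text("The quick brown fox fox jumpd over the lazy dog"))
    # -> {'unknown_words': ['jumpd'], 'rule_violations': ["repeated word: 'fox'"]}
```

The machine learning stage isn't shown here; in practice it could be a statistical language model scoring each sentence, with low-probability passages reported as likely inconsistencies.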