Of course, pseudorandomness is pointless as a yes-no question: the answer is always ``Yes, this generator passes some statistical tests.'' But as a quantitative question---``Which statistical tests does this generator pass?''---it is an extremely useful concept.
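To make the quantitative question concrete, here is a minimal Python sketch; the generator constants and the two tests are my own illustrative choices, not taken from any particular paper. The lowest output bit of this linear congruential generator is perfectly balanced, so it passes a simple frequency test; but it strictly alternates, so it fails a simple serial test.

\begin{verbatim}
def lcg_bits(seed, n, a=1103515245, c=12345, m=2**31):
    """Lowest bit of n successive LCG outputs."""
    bits, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        bits.append(x & 1)
    return bits

def monobit(bits):
    """Frequency test: how far from a 50/50 split of ones?"""
    return abs(sum(bits) - len(bits) / 2)

def serial(bits):
    """Serial test: how often do adjacent bits agree?
    Truly random bits agree about half the time."""
    return sum(b1 == b2 for b1, b2 in zip(bits, bits[1:]))

bits = lcg_bits(seed=12345, n=10000)
print("monobit deviation:", monobit(bits))   # 0.0: passes
print("adjacent agreements:", serial(bits))  # 0 of 9999: fails
\end{verbatim}

So the honest answer is not ``yes'' or ``no'' but a list: this generator passes the frequency test and fails the serial test.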
A 1988 paper by Luby and Rackoff stole the word ``pseudorandom'' for a much narrower concept. This paper says that a generator is ``pseudorandom'' only if it passes all statistical tests that can be computed within a specified time limit. According to that definition, linear congruential generators are not ``pseudorandom,'' and the Blum-Blum-Shub generator is merely conjectured to be ``pseudorandom.''
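To see why linear congruential generators fail the time-bounded definition, here is a sketch of an efficient observer recovering the generator's parameters from three consecutive outputs and predicting the fourth. It assumes, for simplicity, that the modulus is public and that the relevant difference is invertible modulo $m$; real attacks remove these assumptions.

\begin{verbatim}
def predict_lcg(x1, x2, x3, m):
    """Recover a, c of x' = (a*x + c) % m from three
    consecutive outputs, then predict the fourth."""
    a = (x3 - x2) * pow(x2 - x1, -1, m) % m   # modular inverse
    c = (x2 - a * x1) % m
    return (a * x3 + c) % m

m, a, c = 2**31 - 1, 48271, 11   # illustrative parameters
xs, x = [], 123456789
for _ in range(4):
    x = (a * x + c) % m
    xs.append(x)

assert predict_lcg(xs[0], xs[1], xs[2], m) == xs[3]
\end{verbatim}

Predicting the next output is itself a statistical test, and this generator fails it after a few arithmetic operations.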
The narrower concept is useful in cryptography, and it deserves a name (such as ``unpredictable''). But the broader concept is also useful, and the use of ``pseudorandom'' for that concept isn't going away. Reusing the word ``pseudorandom'' for the narrower concept creates unnecessary confusion: what happens when a reader feeds a ``linear congruential pseudorandom number generator'' to a theorem about ``pseudorandom generators''?
By creating new terminological conflicts, you can encourage serious errors, annoy knowledgeable readers, and isolate your research area from the rest of the community. If you're challenged, claim that your new concept is more important than the old concept, that anyone who wants to use the old concept can say something else, and that staying compatible with ancient terminology will interfere with progress.
When the reader sees an adjective (or adjectival phrase) such as ``positive'' or ``continuous'' or ``genus-2'' or ``bounded-grimace,'' he knows that it is restrictive: every ``adj x'' is an x. Even if the reader has never seen the phrase ``genus-2'' before, he knows that the class of genus-2 hyperelliptic curves is contained in the class of hyperelliptic curves.
There are some exceptions to this rule---some non-restrictive adjectives. Here are a few examples: a ``partial function'' is not necessarily a function; a ``skew field'' is not necessarily a field; a ``quantum group'' is not a group. Each exception has a cost: it is one more arbitrary fact for the reader to memorize, and one more reason for the reader to distrust the general rule. You can add to these costs by introducing new non-restrictive adjectives.
Is it true that deterministic algorithms are probabilistic algorithms? Or is an algorithm required to flip at least one coin to qualify as a probabilistic algorithm?
The first possibility---deterministic algorithms are probabilistic algorithms by definition---sounds weird. It isn't how people talk. People often write things like ``this algorithm is probabilistic'' to emphasize that there are coin flips.
Other people define ``probabilistic algorithm'' without this requirement, and then state theorems about ``probabilistic algorithms'' that cover both algorithms that flip coins and algorithms that don't. This definition changes the meaning of the statement ``this algorithm is probabilistic'': it removes all content from the word ``probabilistic.''
Of course, this compatibility failure is entirely unnecessary. The broad class can be (and often is) simply called ``algorithms.'' Theorems refer to ``algorithms.'' An algorithm is ``deterministic'' if it has no coin flips. End of problem.
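Here is a minimal sketch of that convention, assuming an illustrative interface (my own, not a standard one) in which every algorithm receives a source of coin flips and a deterministic algorithm simply ignores it:

\begin{verbatim}
import random

def randomized_max(items, rng):
    """An algorithm that flips coins: estimates the maximum
    from a random sample."""
    return max(rng.choice(items) for _ in range(10))

def deterministic_max(items, rng):
    """A deterministic algorithm: same interface, no coin flips."""
    return max(items)

# A theorem about "algorithms" quantifies over both; the adjective
# "deterministic" then restricts, just as "genus-2" restricts
# "hyperelliptic."
rng = random.Random(0)
print(randomized_max(list(range(100)), rng))
print(deterministic_max(list(range(100)), rng))
\end{verbatim}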
As another example, some papers define a broad class of ``existential forgeries'' (rather than simply ``forgeries'') and a subclass of ``selective forgeries,'' breaking the common use of the phrase ``existential forgeries'' to refer to forgeries that are not selective.
You can create further problems of this type by inserting content-free adjectives into your own definitions. Here's a model for you to follow: rather than defining a set of ``integers'' of which some are ``nonnegative integers,'' you can define a set of ``signed integers'' of which some are ``nonnegative integers.''