@mikewalsh Yes, that is the one. Very little is known about it other than that the spider can't handle https. Not much is written about it either, beyond a few things found in forums for webmasters and admins.
After researching what little information there is, my conclusion is that this spider may be probing for specific code as another stage in gaining control of servers.
Last night I turned the rewrite engine on to allow murga-linux links to work again, and so I could monitor the incoming traffic coming through this domain. At first all was normal, but after an hour the requests from the spider began to mount, and every request was the same incomplete URL, so no actual topic or post was being probed. Before long I could no longer log in to the server via SSH: no more available memory, caches overflowing. I turned the engine off and the mounting problems instantly disappeared.
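If the engine goes back on at some point, one option might be to deny the spider by its User-Agent before any of the murga-linux rewrites run. This is only a rough sketch: the exact User-Agent string is an assumption on my part, so it would need to be checked against the access log first.

[code]
# .htaccess sketch: refuse requests from the spider before other rewrites run.
# The User-Agent string "The Knowledge AI" is an assumption; verify it in the access log.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "The Knowledge AI" [NC]
RewriteRule .* - [F,L]
[/code]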
I am leaving our rewrite system in place for future use, but for now the engine stays off. Maybe we can find out more about The Knowledge AI.
There is a robots.txt in place with rules to disallow the spider from crawling the domain, but it remains to be seen how effective that will be.
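For reference, the relevant rule would look something like the sketch below; the user-agent token is an assumption, since the only name the bot gives is "The Knowledge AI".

[code]
# robots.txt: ask the spider to stay away from the whole domain.
# The user-agent token is an assumption based on how the bot identifies itself.
User-agent: The Knowledge AI
Disallow: /
[/code]

Of course robots.txt is only advisory, so a badly behaved crawler can ignore it completely, which is why blocking it at the server level may turn out to be the more reliable option.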