Fetch Article Uses Too Much Memory
Currently, vB_WikiArticle::fetch retrieves all articles in the requested namespace. This avoids running fetch more than once on a single page, but it also means a lot of unnecessary preprocessing and memory usage, since most of those articles will never be displayed. This would be especially problematic for wikis with millions of articles. We need a way to obtain a list of needed article names ahead of time without fetching them all at once.
One option is a preg_match pass that cycles through all posts and looks for links (this wouldn't catch autolinks); a sketch of this approach follows. Another option is to use the link cache, which may contain autolinks, but which may be out of date, disabled, or slower than performing a preg_match (testing will tell). As for book and chapter links, a list of IDs can easily be compiled to fetch only the items that are shown. If the fetch method becomes more generic than it is now and can fetch by thread ID, we can also reduce the number of fetches performed (provided we already know the thread ID).
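Roughly, the preg_match pass could look like this. It is only a sketch: the [[Title|label]] link syntax, the $posts array, and the pagetext field are assumptions about the surrounding code.

    // Scan post text for wiki links and collect the unique target titles,
    // assuming links are written as [[Title]] or [[Title|label]].
    $titles = array();
    foreach ($posts as $post)
    {
        if (preg_match_all('#\[\[([^\]|]+)(?:\|[^\]]*)?\]\]#', $post['pagetext'], $matches))
        {
            foreach ($matches[1] as $title)
            {
                $titles[trim($title)] = true;
            }
        }
    }
    $needed = array_keys($titles); // article names to pre-fetch in one query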
So a new class should be made (a rough skeleton follows the method list):
preFetch - registers thread IDs or titles to be added to memory
fetch - current behavior, except it uses only the existing cache; if the item is not cached, it performs a slave query
commit - retrieves the registered info and adds it to memory
update - performs a db and cache update
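Here is a rough skeleton of that class. The class name, the internal arrays, and the query details are all assumptions, and the method bodies are placeholders, not a final implementation.

    class vB_WikiArticleCache
    {
        protected $registry;          // vBulletin registry object
        protected $queued = array();  // thread IDs / titles registered via preFetch
        protected $cache = array();   // articles already loaded into memory

        public function __construct(vB_Registry $registry)
        {
            $this->registry = $registry;
        }

        // Register a thread ID or title to be added to memory later.
        public function preFetch($id_or_title)
        {
            $this->queued[] = $id_or_title;
        }

        // Retrieve everything registered so far in one slave query and
        // add it to memory, e.g. WHERE threadid IN (...) OR title IN (...).
        public function commit()
        {
            // run the slave query, then index the results into $this->cache
            // by both thread ID and title; clear $this->queued
        }

        // Use only the existing cache; if the item is not cached, fall
        // back to a single slave query for just that article.
        public function fetch($id_or_title)
        {
            if (isset($this->cache[$id_or_title]))
            {
                return $this->cache[$id_or_title];
            }
            // slave query for the single article, then cache and return it
        }

        // Perform a db and cache update for one article.
        public function update($article)
        {
            // write to the master db, then refresh the entry in $this->cache
        }
    }

This keeps the common path (preFetch + commit) down to one query per page, while fetch remains a safety net for titles that weren't registered ahead of time.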