I have a brand new 4" refractor which is my first "real" telescope (as a kid I had a let's-say-3" Newtonian, your typical shopping-mall reflector, and I've had some decent binos and spotting scopes since then).
The world of amateur astronomy has vastly changed since I was a kid. Alan Zeichick called something "the greatest innovation since the big dob, the go to scope, and the SC." All innovations which post-date my childhood! (Well, I think Schmidt-Cassegrains were hitting the scene?) One of the options when buying a telescope today is a "go to" computer which knows the orientation of your tripod and your latitude. You type in what object you want to look at (Jupiter, the Andromeda Galaxy, the Wild Duck star cluster), move your scope around until the numbers on the computer read "0," and then look through the eyepiece ("Wow, there it is.")
Such computers aren't cheap, and when researching your purchase, you'll get a lot of "computers are all well and good, but ultimately, star-hopping is both effective and satisfying." So, like me, you might decide to kick it old school, especially if, like me, you think "Gee, I know several constellations and can pick out Andromeda if I can see Cassiopeia."
What I've concluded, after two nights of abject failure, is that (a) I'm an idiot and (b) star hopping is like programming without an IDE. The "I'm an idiot" aspect is simply reinforcing data that's been accumulating for some time, so let's skip over that.
It's very difficult for an expert to anticipate what will baffle a newcomer. In the case of star-hopping, an expert won't blink at "look 2 degrees SW of a hook-shaped formation found 5 degrees along a line defined by Alpha and Theta." In the case of programming, an expert doesn't need a tree of user-defined objects and methods taking up screen space. And the challenge to the expert is compounded because it's not just remembering what was hard that's the problem: "go to" scopes have only come along recently, and while IDEs have been around for 25 years, it's only been about a decade since, I think, they surpassed the command line (the breakthrough, I think, was the refactoring IDE). For the cost of a "go to" mount, I could get two or three high-quality eyepieces. For the cost of an IDE (even if it's just the time spent mastering an open-source IDE) you could learn a different language or library. As a newcomer, you face two different payoff curves (n.b.: not the same as a learning curve!):
The expert might say "Oh, eventually, you'll appreciate the work of the slower, more 'full-bodied' learning curve:"
But even if you accept that curve, the issue of what to do is still difficult. You actually have to integrate under the curve:
There's some period of time when the "easy" approach is more satisfactory. During that time, you are accumulating a surplus of satisfaction (the area 'A' in the above illustration). Ultimately, the "hard" approach may provide more satisfaction at a given moment, but there's still a "catch up" period ('B') where your total satisfaction is still less than the total satisfaction with the "easy" approach (in a sense, you have to pay off a debt you've incurred). It's only when you get to 'C' that the slower, harder approach really pays off.
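To make the integration argument concrete, here's a toy model in Python. Everything in it is invented for illustration: a flat-payoff "easy" curve and a linearly improving "hard" curve, with numbers chosen only so the A/B/C regions show up cleanly. The shapes don't matter; the structure does.

```python
# Toy payoff curves. The shapes and constants are made up for illustration;
# only the A/B/C structure of the argument matters.

def easy(t):
    return 5.0          # "easy" approach: immediate, flat satisfaction

def hard(t):
    return 0.5 * t      # "hard" approach: slow start, keeps improving

def crossover_time(f_easy, f_hard, t_max=100.0, dt=0.01):
    """First t where the hard approach's *instantaneous* payoff beats the
    easy approach's (the end of region A)."""
    t = 0.0
    while t < t_max:
        if f_hard(t) > f_easy(t):
            return t
        t += dt
    return None

def break_even_time(f_easy, f_hard, t_max=100.0, dt=0.01):
    """First t where the hard approach's *cumulative* payoff (the integral,
    via the midpoint rule) catches up -- the end of region B, where the
    satisfaction 'debt' A is finally repaid."""
    cum_easy = cum_hard = 0.0
    t = 0.0
    while t < t_max:
        cum_easy += f_easy(t + dt / 2) * dt
        cum_hard += f_hard(t + dt / 2) * dt
        t += dt
        if cum_hard > cum_easy:
            return t
    return None

print(crossover_time(easy, hard))    # ~10: hard starts feeling better
print(break_even_time(easy, hard))   # ~20: total satisfaction finally ties
```

Note the gap: in this toy, the hard approach starts feeling better at t ≈ 10, but your accumulated satisfaction doesn't tie until t ≈ 20. Region C only pays if you're still in the game after the break-even point, which is the heart of what follows.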
To a professional, who moved through the 'A' and 'B' periods at a young age, 'C' dominates the curve and it seems natural to say "Slow and steady wins the race." But if you don't spend a long time in 'C' then the "easy" route is ultimately smarter. And, relevant to software developers, you are unlikely to be writing code at 65. If you're a developer in the first world, you're unlikely to be writing code at 45 or maybe even 35. The salary pressure from the <a href="http://en.wikipedia.org/wiki/BRIC" target="_blank" rel="noopener noreferrer">BRIC</a> economies is too great. The finish line is closer than you think.
As to the "go to" mount, I still don't know what to do.