
Intel Core 2 'Penryn' and Linux

An anonymous reader writes "Linux Hardware has posted a look at the new Intel "Penryn" processor and how it will work with Linux. Intel recently released the new "Penryn" Core 2 processor with many new features. So what are these features, and how will they translate into benefits for Linux users? The article covers all the high points of the new "Penryn" core and talks to a couple of Linux projects about end-user performance of the chip."
This discussion has been archived. No new comments can be posted.

  • Perspective (Score:5, Insightful)

    by explosivejared ( 1186049 ) <hagan.jaredNO@SPAMgmail.com> on Thursday November 15, 2007 @09:51PM (#21373865)
    "There are some new instructions that could be more convenient to use in some special cases (like the new pmin/pmax instructions). But these will have no real performance benefit."

    "So we do not plan on adding SSE4 optimizations. We may use SSE4 instructions in the future for convenience once SSE4 has become really widely supported. But I personally don't see that anytime soon..."

    I think that puts the hype over Penryn into perspective. There are some nice improvements in energy leakage and such, but it's nothing revolutionary.
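
    For concreteness, here is a minimal sketch (mine, not from the article) of what the quoted pmin/pmax family does, assuming a compiler with SSE4.1 intrinsics enabled (e.g. gcc -msse4.1):

        /* Clamp four 32-bit ints per iteration with the SSE4.1
         * pmaxsd/pminsd instructions instead of per-element branches.
         * n is assumed to be a multiple of 4 for brevity. */
        #include <smmintrin.h>   /* SSE4.1 intrinsics */

        void clamp_sse41(int *v, int n, int lo, int hi)
        {
            __m128i vlo = _mm_set1_epi32(lo);
            __m128i vhi = _mm_set1_epi32(hi);
            for (int i = 0; i < n; i += 4) {
                __m128i x = _mm_loadu_si128((__m128i *)&v[i]);
                x = _mm_max_epi32(x, vlo);   /* pmaxsd */
                x = _mm_min_epi32(x, vhi);   /* pminsd */
                _mm_storeu_si128((__m128i *)&v[i], x);
            }
        }
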
    • Re:Perspective (Score:5, Interesting)

      by SpeedyDX ( 1014595 ) <speedyphoenix&gmail,com> on Thursday November 15, 2007 @10:32PM (#21374157)
      Isn't that their strategy when they use a finer fab process anyway? I remember reading an article (possibly linked from a previous /. submission) about how they had a 2-step development process. When they switch to a finer fab process, they only have incremental, conservative upgrades. Then with the 2nd step, they use the same fab process, but introduce more aggressive instruction sets/upgrades/etc.

      I couldn't find the article with a quick Google, but I'm sure someone will dig it up.
      • Re: (Score:3, Informative)

        by DaveWick79 ( 939388 )
        I don't have a reference for this either, but this is the message that Intel regularly conveys to the channel. You can see the usage of this release strategy starting with the later Pentium 4 CPUs and it has continued through the various renditions of the Core series processors.
      • Re:Perspective (Score:5, Informative)

        by wik ( 10258 ) on Thursday November 15, 2007 @11:02PM (#21374377) Homepage Journal
        The name you are thinking of is the "tick-tock model."
      • Re: (Score:2, Informative)

        by asm2750 ( 1124425 )
        Like an earlier post said, it's called the "Tick-Tock" strategy. In one upgrade you improve the architecture, and in the next you shrink the fab process. It's not a bad idea, but there are two questions to ask. First, could Intel hit a dead end, since 16nm is the last point on the ITRS roadmap, nullifying this strategy around 2013? Once you go even smaller, you essentially start having gates the size of atoms. And second, once quad core becomes more common, will there really be any reason for consu
        • Re:Perspective (Score:4, Insightful)

          by DaveWick79 ( 939388 ) on Thursday November 15, 2007 @11:46PM (#21374725)
          I believe that by the time quad core becomes mainstream, i.e. every piece of junk computer at Buy More has one, 64-bit apps will also be mainstream. By 2010 every computer sold will come with a 64-bit OS that will emulate 32-bit programs, but all the new software being developed will be transitioning to 64 bit.
          Can CPU performance hit a threshold? Sure it can. But maybe by then they will be integrating specialty processors for video encoding/decoding, data encryption, or file system/flash write optimization onto the CPU die. At some point nothing more will be required for corporate America to run word processors and spreadsheets, and tech spending and development will shift to smaller, virtual-reality-type applications rather than the traditional desktop. I think we have already reached the point where the desktop computer fulfills the needs of the typical office worker. The focus shifts to management and security over raw performance.
          • Re: (Score:3, Interesting)

            by Ours ( 596171 )
            To support what you say, Microsoft said that Vista and Windows 2008 Server were supposed to be the last OSes to be available in 32-bit versions.
            • "Microsoft said that Vista and Windows 2008 Server where supposed to be the last OS to be available in 32-bit versions."
              This may be true as M$ may not ever release a new OS after the massive failure of Vista. The only way it sells is to force it down the throats of those buying new computer who don't have a clue. Even many of those buyers are buying XP to replace that piece of crap.
              • Way too many people are buying Vista to call it "a massive failure". In any case, most of the early adopters seem to be going for 64-bit, so the GGP is probably correct.
            • actually, it's only Windows 2008 Server, not Vista. Bill Laing's "a server guy", and his announcement was about the server versions, but the media misapplied it to Vista. http://apcmag.com/6121/windows_server_gets_vista_version_itis [apcmag.com]
        • Re: (Score:1, Interesting)

          Yeah, at a certain point you run up against the uncertainty principle, but I don't think that's supposed to be anywhere near an issue until the latter half of this century. The first limit that processors are going to hit is frequency: even a signal travelling at the speed of light covers only about four inches during one cycle of a three-gigahertz processor. While that's plenty on-chip, there are going to be problems to resolve whenever data can't be transferred from memory in the space of a single cycle. This is unrelated to the relative speed
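
          As a back-of-the-envelope check on that distance-per-cycle point (plain arithmetic, not a figure from the thread), even a signal at the speed of light covers only about 10 cm per cycle at 3 GHz:

              /* How far a light-speed signal travels in one clock cycle. */
              #include <stdio.h>

              int main(void)
              {
                  const double c = 2.998e8;   /* speed of light, m/s */
                  const double f = 3.0e9;     /* 3 GHz clock         */
                  printf("per-cycle distance: %.1f cm\n", (c / f) * 100.0);
                  return 0;
              }
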
          • by paulatz ( 744216 )
            Yeah, at a certain point you run up against the uncertainty principle, but I don't think that's supposed to be anywhere near an issue until the latter half of this century.

            I think you have no idea what you are talking about. Take silicon as an example: in its crystal form the atoms are separated by about half a nanometer, but if you add an impurity its effect spreads over a radius on the order of 10 nm.

            So you cannot go below 10 nanometers, because separate circuits in the micro-(nano?)-processor start to i
        • by rbanffy ( 584143 )
          'cause, you know, four cores should be enough for everybody. ;-)
      • Take a look at this old image of the Intel roadmap [imageshack.us].

        Also, Intel has a tech page [intel.com] where they describe this 2 year cycle.

      • by Cyno ( 85911 )
        I am impressed with Intel's 45nm Core 2 shrink improvements already, at a 5-10% boost per clock, and these are just the first Penryns. Intel has really turned things around; AMD has some catching up to do. 2008 will be interesting.
    • Re: (Score:1, Informative)

      by jd ( 1658 )
      The greatest gains are to be made where there are the greatest latencies and least bandwidth. The CPU has not been a significant bottleneck for some time. PCI Express 2.x (which works at 5 GT/s and supports multiple roots) and HyperTransport 3 (which works at 20.8 GB/s) are obvious candidates for improving performance over the busses usually used in computers.

      RAM is another area that needs work. I mean, RAM speeds are falling further and further behind, and caches aren't big enough to avoid being saturated by modern softwa
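
      (Rough arithmetic on the link figures above, my numbers rather than the poster's: PCIe 2.0 runs 5 GT/s per lane with 8b/10b coding, so a x16 slot moves about 8 GB/s per direction, while HyperTransport 3's 20.8 GB/s is the two-direction aggregate of a 16-bit link at 2.6 GHz.)

          /* PCIe 2.0 x16 usable bandwidth, per direction. */
          #include <stdio.h>

          int main(void)
          {
              double transfers = 5.0e9;        /* 5 GT/s per lane    */
              double coding    = 8.0 / 10.0;   /* 8b/10b line coding */
              int    lanes     = 16;
              double gbytes = transfers * coding * lanes / 8.0 / 1e9;
              printf("PCIe 2.0 x16: %.1f GB/s per direction\n", gbytes);
              return 0;
          }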

      • Hard drives could also be improved. If you had intelligent drives, you could place the filesystem layer in an uploadable module and have that entirely offloaded to the drive. Just have the data DMAed directly to and from the drive, rather than shifted around all over the place, reformatted a dozen times and then DMAed down

        Uhh what? Just a couple of lines above you said CPUs were overpowered, and now you want the filesystem code to run on the hard drive? Specialized hardware may be faster, but which filesystem are you gonna have running on your hard disk? NTFS? ext3? ZFS? ReiserFS?

        • The point is that eventually it won't matter. Put the responsibility for all file stuff with the storage device. Just have a sufficiently standardized API and quick enough connection to it.

          The idea behind it is that specialized "application" processors in architected groups will comprise an "operating system" of sorts married with hardware that will be quicker than a generalized computer with a monolithic OS.
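
          Purely as a hypothetical illustration of that "sufficiently standardized API" idea (the ofs_* names are invented for this sketch; no such interface exists):

              /* Hypothetical host-side view of a drive that owns its own
               * filesystem: the host asks for file contents by path and
               * offset, and the drive DMAs them straight into the buffer. */
              #include <stddef.h>
              #include <stdint.h>

              typedef struct ofs_handle ofs_handle;  /* opaque, lives on the drive */

              ofs_handle *ofs_open(const char *path, int flags);
              ptrdiff_t   ofs_read(ofs_handle *h, void *dma_buf,
                                   size_t len, uint64_t offset);
              int         ofs_close(ofs_handle *h);
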
      • Re: (Score:1, Informative)

        by Anonymous Coward
        From your ramblings, I can't tell whether you don't know what you're talking about or know just enough to make very confusing remarks.

        It's been a long time since raw computational power has driven CPU development. Almost all improvements are about further hiding latency. Your comments about RAM, disk, etc. are all about I/O latency. This has been getting worse for decades and will continue to do so, and the majority of what CPU designers think about is how to deal with that fact.

        From your comment
    • what if you can get it with the penguin laser etched onto the top heatsink plate? And don't say "then the thermal grease wouldn't work very well" cuz that's the boring answer lol.
    • by Wavicle ( 181176 )
      There are some nice improvements in energy leakage and such, but it's nothing revolutionary.

      That's true for sufficiently brain-dead definitions of "revolutionary." Hafnium-based high-k transistors are revolutionary. Instruction throughput isn't everything; manufacturing technology needs breakthroughs too. Or did you see no point in the continuous shrinkage from 100 microns down to where we are now?
    • That isn't at all surprising. I remember AMD's 3DNow! and Cyrix's extensions, and how they were supposed to revolutionize things. In the end, neither did very well, and they never actually lived up to the hype.

      I remember when Unreal was released: it had software rendering via 3DNow!, and it was far from satisfactory, and not just in resolution; turning that down still led to problems.
      • by vasqzr ( 619165 )
        The 3DNow! miniGL drivers for graphics cards would give the K6 chips a 25-30% boost in FPS, putting them right up there with the Pentium II at the time.
    • The article only focused on video decode/encode speeds, but that is not where most of the SSE4 instructions help with speed. The newer SSE4 instructions help more with vectorization and parallelization. If the encoder/decoder does not run in parallel, then most of the new SSE4 instructions won't help. If you look at Intel's TBB, you'll see exactly where most of the newer SSE4 instructions can be used!
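
    As a sketch of that point (my example, not the poster's): SSE4.1's dpps instruction does a four-wide multiply-and-sum in one shot, but it only pays off inside code that is already data-parallel enough to keep feeding it. Assumes SSE4.1 intrinsics (e.g. gcc -msse4.1):

        /* 4-element dot product via the SSE4.1 dpps instruction. */
        #include <smmintrin.h>

        float dot4(const float *a, const float *b)
        {
            __m128 va = _mm_loadu_ps(a);
            __m128 vb = _mm_loadu_ps(b);
            /* 0xF1: multiply all four lanes, write the sum to lane 0 */
            __m128 d = _mm_dp_ps(va, vb, 0xF1);
            return _mm_cvtss_f32(d);
        }
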
    • I think that puts the hype over Penryn into perspective. There are some nice improvements in energy leakage and such, but it's nothing revolutionary.

      Improvements in fabrication technology have nothing to do with improvements in the ISA, beyond the extent to which the ISA relies on the performance provided by the process. The process improvements in Penryn are revolutionary. 45nm with hafnium-based gates, plus a whole slew of other process changes needed to make it work, is something that five years ago wasn't even believed possible - I recall gloom-and-doom predictions that the brick wall was at 65 nm.

      Practically, Penryn may be an incremental step, but the pro

  • by compumike ( 454538 ) on Thursday November 15, 2007 @09:51PM (#21373869) Homepage
    In the article, the authors of XviD and FFmpeg aren't too optimistic about speedups. If video encoding/decoding is the bottleneck, then why not start building motherboards with a dedicated chip specialized for this kind of work, instead of trying to cram extra instructions into an already bloated CISC CPU? It doesn't make sense to me.

    Also, an earlier comment that may be useful in this discussion: Why smaller feature sizes (45nm) mean faster clock times. [slashdot.org]

    --
    Educational microcontroller kits for the digital generation. [nerdkits.com]
    • by 644bd346996 ( 1012333 ) on Thursday November 15, 2007 @09:59PM (#21373911)
      The place for hardware decoders is on the graphics card. Hence the reason why Linux needs to use the CPU.
      • by samkass ( 174571 )
        One could argue that the place for graphics cards is on the CPU. What else are you going to do with all that extra silicon real estate?
        • Re: (Score:3, Interesting)

          Some workloads benefit from vector processors, and some don't. For now, it is best economically to keep vector co-processors separate from CPUs, and use the advances in chip tech to lower power consumption and add more cores to the CPU.

          For example, many server workloads are handled best by a chip like Sun's UltraSparc T1, which doesn't have any floating point capabilities worth mentioning. People running that kind of server wouldn't buy a Xeon or Opteron that had a 600M-transistor vector processor. It's a h
          • by samkass ( 174571 )
            In that case, it is much more economical if you can upgrade the vector processor without throwing away a perfectly good CPU.

            Is it? How much does that slot, bus, southbridge, etc., cost? CPUs are cheap! Certainly cheaper than most graphics cards. And the proximity to L1/L2 cache and computational units might make for some interesting synergy.
            • Every two years or so, when you go to upgrade, there is a new socket design or some limitation with the existing northbridge/southbridge chipsets that requires you to buy a new mainboard anyway. So some of the components you listed might be spent already.

              With PCI express and the bandwidth it can handle, it might be the best option to put it either on a separate daughter card or allow a separate video card to be installed and dedicated specifically to this. Either way, processor, daughter cards, or video c
      • The place for hardware decoders is on the graphics card. Hence the reason why Linux needs to use the CPU.

        Why? If you're going to be displaying the video on screen, then yeah, it makes sense to have it on the graphics card. But why can't we just have a general-purpose codec card? What if I don't want to display video, I just want to encode/decode it? Surely this is such a fundamental need that it deserves its own chip. If they can fit an encoder into a 1-pound handheld digital camcorder, why can't they p

    • by Vellmont ( 569020 ) on Thursday November 15, 2007 @10:14PM (#21374005) Homepage

        instead of trying to cram extra instructions

      Cram? Chip designers get more and more transistors to use every year. I don't believe there's any "cramming" involved.
        into an already bloated CISC CPU?
      You're about 15 years out of date. The x86 isn't exactly a CISC CPU; it's a complex instruction set that decodes into a simpler one internally. Only the Intel engineers know how they added the SSE4 instructions, but based on the comments of the encode/decode guys, these new instructions sound a lot like the old ones. It's not too hard to imagine that they didn't have to change much silicon around, and maybe got to re-use some of the old internals and just interpret the new instructions differently.

      Anyway, so why not just have a dedicated piece of silicon for this exact purpose? Partly because it'd be more expensive (you'd basically have to implement a lot of the stuff already on the CPU, like cache), but also because it's just too specific. How many people really care about encoding video? 5% of the market? Less?

      Hardware decoding on hardware is already a reality, and has been for some time. GPUs have implemented this feature for at least 10 years. But of course it's generally not a feature that has dedicated silicon; it's integrated into the GPU. If this is the first you've heard of it, it's not surprising. The other problem with non-CPU-specific accelerations is that they don't ever really become standard, as there's no standard instruction set for GPUs, and even a GPU maker may just drop that feature in the next line of cards.

      In short, specialized means specialized. Specialized things don't tend to survive very well.
      • Re: (Score:2, Insightful)

        by jibjibjib ( 889679 )
        How many people really care about encoding video? 5% of the market? Less?

        I don't know why you seem to think video encoding is some sort of niche technical application that no one uses. A huge number of people record video on digital cameras and want to email it or upload it without taking too long. Many people now use Skype and other VOIP software supporting real-time video communication. Many people rip DVDs. Many people (although not a huge number) have "media center" PCs which can record video from TV

      • Re: (Score:2, Interesting)

        by xorbe ( 249648 )
        > Cram? Chip designers get more and more transistors to use every year. I don't believe there's any "cramming" involved.

        Someone is definitely not a mainstream CPU designer! It never all fits... ask any floor-planner.
      • x86 not CISC?! (Score:5, Interesting)

        by porpnorber ( 851345 ) on Friday November 16, 2007 @03:21AM (#21375881)

        x86 has a hella complex instruction set, and it's decoded in hardware, not software. On a computer. So: it's a CISC. A matter of English, sorry, not religion. Sure the execution method is not the ancient textbook in-order single-level fully microcoded strategy - but it wasn't on a VAX, either, so you can't weasel out of it that way. ;)

        Of course, the problem isn't with being a CISC, anyway. Complex instruction sets can save on external fetch bandwidth, and they can be fun, too! It was true 25 years ago, and it's still true now. CISC was never criticised as inherently bad, just as a poor engineering tradeoff, or perhaps a philosophy resulting in such poor tradeoffs.

        The real point is twofold. First, the resources, however small, expended on emulating (no longer very thoroughly) the ancient 8086 are clearly ill-spent. While this may have come about incrementally, it could all by now be done in software for less. And second, while we don't write assembly code any more, we do still need machines as compiler targets; and a compiler wants an ISA that is either simple enough to model in detail (the classic RISC theory) and/or orthogonal enough to exploit thoroughly (the CISC theory). Intel (and AMD, too, of course; the 64-bit mode is baffling in its baroque design) gives us neither; x86 is simply not a plausible compiler target. It never was, and it's getting worse and worse. And that is precisely why new instructions are not taken up rapidly: we can't just add three lines to the table in the compiler and have it work, as we should be able to do; we can't just automatically generate and ship fat binaries that exploit new capabilities where they provide for faster code, as must be possible for these instruction set increments to be worthwhile.

        Consider, for example, a hypothetical machine in which there are a number of identical, wide registers, each of which can be split into lanes of any power of two width; and an orthogonal set of cleanly encoded instructions that apply to those registers. CISCy, yes, but also a nice target that we can write a clean, flexible, extensible compiler back end for. Why can't we have that, instead? (Even as a frikkin' mode if back compatibility is all and silicon is free, as you appear to argue!)

        It shouldn't be a question of arguing how hard it is or isn't for the Intel engineers to add new clever cruft to the old dumb cruft, but one of what it takes to deploy a feature end-to-end, from high level language source to operations executed, and how to streamline that process.

        So, sure, give us successive extensions to the general-purpose hardware, but give them to us in a form that can actually be used, not merely as techno-marketroids' checklist features!
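
        To make the fat-binary complaint concrete, this is the kind of dispatch that currently has to be hand-rolled (a sketch: the filter_* routines are placeholders assumed to be compiled separately, the SSE4.1 one with -msse4.1; SSE4.1 is reported in CPUID leaf 1, ECX bit 19):

            /* Hand-rolled runtime dispatch between an SSE4.1 code path and
             * a generic fallback, using GCC's <cpuid.h>. */
            #include <cpuid.h>

            static int have_sse41(void)
            {
                unsigned int eax, ebx, ecx, edx;
                if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    return 0;
                return (ecx >> 19) & 1;   /* SSE4.1 feature bit */
            }

            extern void filter_sse41(float *buf, int n);  /* built with -msse4.1 */
            extern void filter_plain(float *buf, int n);  /* generic fallback    */

            void filter(float *buf, int n)
            {
                if (have_sse41())
                    filter_sse41(buf, n);
                else
                    filter_plain(buf, n);
            }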

        • by Jay L ( 74152 )
          Thanks for the explanation. I've been wondering why, in late 2007, everything I see is still optimized for i686 (or even i586). I upgraded from Core to Core 2 and couldn't figure out why I didn't need to recompile everything to take full advantage; that's when I noticed that Tiger was still using a gcc that didn't even have a Core target!

          It sounds like Penryn has a bunch of slightly-neat features that we'll start taking advantage of sometime in 2025.
          • Heck, for my Core 2 Duo, I need to use the -march=nocona flag for compilation, and if I recall correctly, that was originally added for the Prescott or similar: "Improved version of Intel Pentium4 CPU with 64-bit extensions, MMX, SSE, SSE2 and SSE3 instruction set support." Though I doubt the software interface to the CPU is that different.

            - Neil
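
            As a side note on what those -march/-msse switches change from the source side: the compiler predefines macros such as __SSE3__ or __SSE4_1__, so one source tree can carry a tuned path and a fallback (e.g. gcc -march=nocona for SSE3 versus gcc -march=core2 -msse4.1 for a Penryn-class build). A trivial probe:

                /* Report which SIMD level this binary was compiled for. */
                #include <stdio.h>

                int main(void)
                {
                #if defined(__SSE4_1__)
                    puts("built with SSE4.1 enabled");
                #elif defined(__SSE3__)
                    puts("built with SSE3 enabled");
                #else
                    puts("built for baseline x86 only");
                #endif
                    return 0;
                }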

        • If you want orthogonal, why not use an existing non-Intel CPU?

          Part of the problem is that we (still) don't really know how to design a CPU that is easy to compile fast code for (e.g., in all situations).
          • Intel wins in the market partly because it rides on Microsoft's coat tails (why Microsoft wins in the market is another long story, of course), and partly because it has fabrication technology that is actually sufficiently better than the competition to dominate most negative effects of architectural decisions. That in turn is because of economies of scale, and I understand was originally bootstrapped through their memory business, rather than by CPUs as such. And if it wins so utterly in the market, then i

      • Re: (Score:1, Insightful)

        by Anonymous Coward
        >How many people really care about encoding video? 5% of the market? Less?

        Anyone who wants to video conference and doesn't have tons of bandwidth.
      • Re: (Score:2, Funny)

        Hardware decoding on hardware is already a reality, and has been for some time.

        As opposed to hardware decoding on software?

        Or redundant redundancies of redundancy?

    • If you are really interested in encoding video, I would think that you would have a specialized chip. My TV tuner has a specialized chip for encoding MPEG-2, which means it can encode 12 Mbit/s MPEG-2 without putting any noticeable load on my processor. I'm sure it wouldn't be too difficult to build a chip specifically to encode video into MPEG-4.
      • I'm sure it wouldn't be too difficult to build a chip specifically to encode video into MPEG-4.

        It's probably not, but as far as I'm aware, Thomson's Mustang ASIC is the only commonly available one.

        Most hardware video encoding is done with general-purpose DSPs and specialised software.

    • Re: (Score:1, Interesting)

      by Anonymous Coward
      It's strange that XviD doesn't think SSE4 does much for video but Intel trots out DivX6 as the show pony for SSE4 optimization and speedup.
  • by hattable ( 981637 ) on Thursday November 15, 2007 @09:59PM (#21373913) Journal

    "So we do not plan on adding SSE4 optimizations. We may use SSE4 instructions in the future for convenience once SSE4 has become really widely supported. But I personally don't see that anytime soon..."

    This just reminds me of CONFIG_ACPI_SLEEP. About 2 times a month I am staring at this option wondering if I will ever get to use it. Some things just are not worth developer time to implement.
  • Am I the only one who read Penryn as penguin?
  • Remember MMX ? (Score:4, Informative)

    by 1888bards ( 679572 ) on Thursday November 15, 2007 @10:07PM (#21373969)
    I seem to remember Intel doing this when they released the 'first' MMX instructions in the Pentiums. At that time they had actually doubled the L1 cache from 16K to 32K in the new Pentiums, but they somehow managed to convince/fool everyone that the performance gain was a result of MMX. Very sneaky/clever.
    • MMX only added extra multimedia capabilities, so it was still for the most part pretty useless on most machines, especially in a business environment. More Intel marketing.
      • MMX was not useless. Despite its marketing name, it didn't have a whole lot to do with multimedia (though it did have obvious applications there). It was x86's initial introduction to vector/SIMD instructions. The ability to apply the same instruction to four numbers at once (rather than using a loop) was a huge boon. Intel might have marketed it strangely, but to some degree it was Intel playing catch-up to other architectures which had already added vector instructions.

        It's true though that we di
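
        As a minimal sketch of that four-at-once point (my example, using the original MMX intrinsics): paddw adds four 16-bit values in a single instruction, and _mm_empty() releases the shared MMX/x87 state afterwards.

            /* Add four 16-bit integers at once with MMX. */
            #include <mmintrin.h>

            void add4_mmx(short *dst, const short *a, const short *b)
            {
                __m64 va = *(const __m64 *)a;
                __m64 vb = *(const __m64 *)b;
                *(__m64 *)dst = _mm_add_pi16(va, vb);   /* paddw */
                _mm_empty();                            /* emms  */
            }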

  • by ocirs ( 1180669 ) on Thursday November 15, 2007 @10:19PM (#21374041) Homepage
    These guys are pretty much saying that they don't really intend to optimize the code for Penryn, because very few processors will have SSE4, and even then they don't expect much performance improvement. I'm still waiting for decent 64-bit drivers for half of my hardware... Most early adopters pay a premium for features that aren't really utilized at first, and by the time the software catches up the hardware is dirt cheap. However, Penryn (except for the Extreme Edition) is an exception, since it is priced at a point where it is worth paying the extra buck or two for features that won't have much impact until years later when the software catches up. I'm really looking forward to Nehalem, though; that architecture update is going to bring a significant improvement in performance without depending much on software optimization.
    • by DAldredge ( 2353 )
      What hardware do you have that doesn't have 64 bit drivers?
      • by ArcherB ( 796902 ) *
        What hardware do you have that doesn't have 64 bit drivers?

        Adobe Flash and Opera.

        OK, it's not hardware or even drivers, but it's enough to make me regret installing 64-bit Ubuntu.

      • by ocirs ( 1180669 )
        Emphasis on DECENT. My nVidia 7900GT driver still doesn't let me have a dual-monitor setup without completely messing up the display on one screen, and it's STILL not fixed. The Creative sound drivers suck badly: some of my games still can't use the 3D features of the sound card, and the sound quality is crap; I was better off using my onboard sound card. There wasn't a driver for my D-Link 530TX gigabit card for a really long time; there's a driver out now, but I can't access half the features like r
  • Intel bothered to add CRC-32, but they didn't add MD5 or SHA-1: wtf?
    Seriously, SHA-1 in microcode would be hella fast.
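
    For what it's worth, the crc32 instruction belongs to SSE4.2 (the Nehalem half of SSE4, not Penryn's SSE4.1) and is exposed as the _mm_crc32_* intrinsics; it computes CRC-32C (the Castagnoli polynomial), not the zlib/Ethernet CRC-32. A minimal sketch, assuming SSE4.2 support (e.g. gcc -msse4.2):

        /* Byte-at-a-time CRC-32C using the SSE4.2 crc32 instruction. */
        #include <nmmintrin.h>
        #include <stddef.h>
        #include <stdint.h>

        uint32_t crc32c_sse42(const unsigned char *buf, size_t len)
        {
            uint32_t crc = 0xFFFFFFFFu;
            for (size_t i = 0; i < len; i++)
                crc = _mm_crc32_u8(crc, buf[i]);
            return crc ^ 0xFFFFFFFFu;
        }
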
    • Re: (Score:1, Informative)

      by Anonymous Coward
      MD5 and SHA-1 are already compromised algorithms. CRC32 makes no cryptographic claims, so it isn't going to be obsoleted and discarded by some new piece of research.
      • by vidarh ( 309115 )
        For most uses of MD5 and SHA-1 in modern applications it makes very little difference that it's possible to manufacture collisions. That said, I'd really like to know what kind of application requires enough MD5 and SHA-1 computation that the load would be worth dedicated instructions.
    • My 1.2 GHz (C7) EPIA board runs a 28 MByte/s file server over gigabit LAN - with transparent AES decryption (dm-crypt)... :)
  • by tyrione ( 134248 ) on Thursday November 15, 2007 @11:56PM (#21374799) Homepage
    LLVM makes a lot more sense with these latest processors. Sure, the SSE4 instructions won't be that immediately useful to Linux. They sure as hell will be for OS X Leopard.
    • LLVM == Hot Air (Score:1, Interesting)

      by Anonymous Coward
      Since 2002 we've been hearing about LLVM. It sounds like something generically "vaguely very cool TM", but when you download it, it's a bunch of "optimization strategy frameworks, etc., etc." Still nothing practical, though. When you can compile the kernel with LLVM, it works, and it is not 50 times slower than gcc, wake me up. It seems that in the best-case scenario LLVM would require at least 7 more years of heavy development before it gets there, if ever at all. If you want to invest in hot air and arrange your
      • Re:LLVM == Hot Air (Score:5, Informative)

        by pavon ( 30274 ) on Friday November 16, 2007 @01:23PM (#21381595)
        Apple used LLVM to improve the performance of software fallbacks for OpenGL extensions by a hundredfold [arstechnica.com] in Leopard, and a big part of that was because it was good at optimizing high-level routines around the low-level features of the chip, such as Altivec/SSE2, 32-bit/64-bit, PPC/x86, etc. So it stands to reason that, to the extent that SSE4 is useful, LLVM will make good use of it, just like it did for other extensions.

        That sounds pretty practical to me.
        • Apple used LLVM to improve the performance of software fallbacks for OpenGL extensions by a hundredfold [arstechnica.com] in Leopard, and a big part of that was because it was good at optimizing high-level routines around the low-level features of the chip, such as Altivec/SSE2, 32-bit/64-bit, PPC/x86, etc. So it stands to reason that, to the extent that SSE4 is useful, LLVM will make good use of it, just like it did for other extensions.

          If a new compiler frontend/backend/whatever improved the performance of those routines 100x, it's because the original routines were horribly inefficient. That is a simple fact and it is still true even when Apple is involved.

          • by tyrione ( 134248 )

            If a new compiler frontend/backend/whatever improved the performance of those routines 100x, it's because the original routines were horribly inefficient. That is a simple fact and it is still true even when Apple is involved.

            Apple is driving the costs behind LLVM. They are accelerating its development goals, and no, GCC was not capable of providing the improvements to OpenGL and Quartz that Apple needed.

            Apple buys CUPS and makes sure it remains the same licensing while paying the salaries of its develo

  • All that matters is, does it run Linux?

    It does, end of discussion. Everything else is simply about applications.
