Showing posts with label gpu. Show all posts

GPU Computing Gems Emerald Edition (Applications of GPU Computing Series) Review


I have to agree with H. Nguyen. This book is a missed opportunity. GPGPU computing is new to programmers and barely even known to scientists. The entries in this book don't really show sophisticated GPGPU philosophy or idioms. You won't read this and have "aha" moments. It would be nice if the text focused on advanced uses of segmented scan (the central trick in GPGPU computing) for load balancing and allocation, and helped the reader develop a toolbox for writing their own kernels. What's really needed is a GPU replacement for basic computer science texts like Sedgewick et al. Just learning how to add up numbers, write a sort, write a sparse-matrix code, and so on, at near-peak efficiency of the device, is a great learning experience, because you learn to think with cooperative-thread-array logic rather than imperative logic. Until you master that, it's not possible to write efficient GPU code. I give the contributors credit for the articles, but I think the editors made a mistake by not giving the book a clearer and narrower focus. Hopefully there will soon be a book that tackles ten can't-live-without algorithms and covers them in very fine detail, addressing all performance aspects of the code and showing how tightly coupled it is to the device architecture.
On the other hand, I'm giving the book a second star because it does let the reader know that others are using GPGPU to solve science problems, and the topics are pretty interesting, even if the implementations are not in the GPU idiom.
The best references are still the technical docs from NVIDIA and ATI (you should read both vendors' docs even if you only use CUDA, as the extra perspective helps), the CUDA technical forum, and the handful of research papers written by good GPGPU coders (many of whom now work at NVIDIA).
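For readers unfamiliar with the segmented scan the reviewer calls the central GPGPU trick, a short serial reference sketch shows what the primitive computes. This is plain Python, not device code, and the function name and flag-array convention are mine, not from the book; GPU implementations produce the same result in parallel, which is what makes the primitive useful for load balancing and allocation.

```python
def segmented_inclusive_scan(values, flags):
    """Serial reference for a segmented inclusive scan (sum).

    flags[i] == 1 marks the start of a new segment; the running
    sum resets at each segment boundary.
    """
    out = []
    running = 0
    for v, f in zip(values, flags):
        running = v if f else running + v
        out.append(running)
    return out

# Example: two segments, [1, 2, 3] and [4, 5]
print(segmented_inclusive_scan([1, 2, 3, 4, 5], [1, 0, 0, 1, 0]))
# -> [1, 3, 6, 4, 9]
```

The parallel versions (e.g. in the Sengupta et al. line of work the reviewer alludes to) compute exactly this, but in O(log n) parallel steps per block.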


Programming Massively Parallel Processors: A Hands-on Approach (Applications of GPU Computing Series) Review


This book is a much better introduction to programming GPUs via CUDA than the CUDA manual or the various presentations floating around the web. It is a little odd in coverage and language. You can tell it is written by two people with different command of English as well as different levels of passion. One co-author seems to be trying very hard to be colorful, looking for idiot-proof analogies, but is prone to repetition; the other sometimes sounds like a dry marketing droid. There are some mistakes in the code in the book, but not too many, since the authors don't dwell too long on code listings. In terms of coverage, I wish they had covered texture memory, profiling tools, examples beyond simple matrix multiplication, and advice on computational thinking for codes with random access patterns. Chapters 6, 8, 9, and 10 are worth reading several times, as they are full of practical tricks for trading one performance limiter for another in the quest for higher performance.
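The optimization style the reviewer praises in those chapters usually starts from blocking (tiling) the simple matrix multiply the book uses as its running example. The sketch below is a serial Python stand-in, not CUDA code, and the function name and the assumption that the matrix size is divisible by the tile width are mine; the point is only the loop structure, which mirrors how a CUDA kernel stages tile-sized blocks in shared memory to cut global-memory traffic.

```python
def tiled_matmul(A, B, n, tile):
    """Multiply two n x n matrices (lists of lists) tile by tile.

    Each (ti, tj, tk) iteration touches only a tile x tile block
    of each operand -- the same access pattern a shared-memory
    CUDA kernel exploits. Assumes n is divisible by tile.
    """
    C = [[0] * n for _ in range(n)]
    for ti in range(0, n, tile):
        for tj in range(0, n, tile):
            for tk in range(0, n, tile):
                for i in range(ti, ti + tile):
                    for j in range(tj, tj + tile):
                        s = C[i][j]
                        for k in range(tk, tk + tile):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(tiled_matmul(A, B, 2, 1))  # -> [[19, 22], [43, 50]]
```

Any tile width gives the same product; on a GPU the tile width is chosen to balance shared-memory capacity against occupancy, which is exactly the limiter-trading game those chapters describe.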


CUDA by Example: An Introduction to General-Purpose GPU Programming Review


"CUDA by Example: An Introduction to General-Purpose GPU Programming" is a brand-new text by Jason Sanders and Edward Kandrot, senior members of NVIDIA's CUDA development team. It is essentially the second introductory text on general-purpose GPU programming to hit the market, the first being "Programming Massively Parallel Processors: A Hands-On Approach" by David Kirk and Wen-Mei Hwu.
The Good: it is not very common to find a technical book in this price range that is not simply in greyscale. Perhaps unsurprisingly for an NVIDIA book, there's quite a bit of green, and this definitely enhances the reading experience. On a more substantive note: the authors really mean the "by example" part of "CUDA by Example". From chapter 3 onward, all the main concepts are fleshed out by showing and dissecting lots of code -- probably more so than in Kirk & Hwu's text, which includes application case studies but also a more extensive treatment of the CUDA architecture. As with any example-based book, it is important to run and modify the programs while reading through the text. Right now there are a few hiccups with the files Sanders & Kandrot were kind enough to provide (e.g. as of this writing, README.txt and license.txt do not have the appropriate permissions set), but I'm pretty sure these are just teething troubles that will disappear soon enough. The writing is cheerful (e.g. "For those readers who are more familiar with Star Trek than with weaving, a warp in this context has nothing to do with the speed of travel through space.", p. 106) and the explanations are for the most part clear, the language pretty lucid -- once again, probably more so than in the Kirk & Hwu volume. This, along with the availability of lecture slides and lab materials for the latter book, points to the main difference between the two texts: Sanders & Kandrot's book is better suited to self-study of CUDA C, while the Kirk & Hwu book is more of a class textbook (and thus broader). Finally, I was pleased to see Sanders & Kandrot include a whole chapter (chapter 11) on working with multiple GPUs, a topic Kirk & Hwu relegate to a short section.
The Bad: having color is a welcome addition, but I could not understand why the authors chose simply to follow the text editor's default highlighting of keywords when they could have used color to highlight specific portions of the code. Similarly, a number of figures (e.g. Figs. 5.5 and 8.1) are described in the text as containing green, but they show up in greyscale. The book also contains quite a few minor typos, which is normal; what's not normal is that every single section cross-reference outside the appendix is wrong (I counted 16 in total). Moving on to more consequential matters: Kirk & Hwu have a chapter on floating-point topics; given that numerical computation is certainly part of general-purpose GPU programming, Sanders & Kandrot could have said more about it. On a different note, Kirk & Hwu have a chapter on the competing programming model OpenCL, while Sanders & Kandrot do not even have an index entry for it -- one might counter that they knowingly put CUDA in the title. This brings me to my main gripe with this book: why didn't the authors just call it "CUDA C by Example"? I believe the answer is connected to their ambivalence toward C++. An illustrative example: new and delete are used in host code only once in the entire volume (on p. 82 and p. 84, respectively), but when the code snippets are shown again (on pp. 86-87), new and delete have been silently replaced by malloc and free! In device code, the authors do not discuss CUDA-supported C++ constructs such as default parameters, namespaces, and function templates, not to mention compute-capability-2.0 features like function objects. (Structures with member functions alone do not make a program C++.) In a nutshell, the book contains too much C++ for people who only know C, and not enough C++ for those who actually use that language.
Despite these misgivings, I cannot ignore this book's low selling price (especially on the Kindle), its practical focus on a multitude of code listings, and the fact that its explanations are generally clear. Thus, I think it is an appropriate buy for those interested in learning about CUDA C.
Alex Gezerlis
