Support for ATI Radeon GPUs
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
You can get it in Germany: http://www.pollin.de/shop/suchergebnis.html?S_TEXT=odroid&log=internal
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
@ alexander If I could choose between two similarly priced minicomputers with comparable performance, I'd choose the one that runs Linux, simply because all projects support this OS. Your last link is interesting and almost convinced me. I see that it has DDR2 (the Q7 has DDR3). Do you know if it's a 28nm CPU? This is the first time I've heard of the Odroid. Are there any other minicomputers that support Linux?

@ Bazso You are right, it will run warm, but I'm guessing it's also less likely to overheat, and I'm willing to go down to 80% if necessary. The Q7 has a larger casing than a USB stick and more ventilation holes. http://www.geekbuying.com/item/Q7-TV-BOX-RK3188-Bluetooth-v4-0-Android-4-2-External-Wifi-Antenna-Ethernet-Port-with-2-4GHZ-wireless-Air-Mouse--Black-324259.html
Joined: 7 Dec 12 Posts: 87 Credit: 3,230,840 RAC: 84
Last modified: 6 Jun 2014, 18:42:24 UTC

All Android TV sticks can also run Linux. They are meant to run Android by default, but there are several enthusiasts out there who make Linux images for them. In some cases it's not so easy to get it going, but if you're motivated and don't mind reading some forums, you'll get it working ;)

About overheating: some TV sticks and other Android boxes have this issue more than others, but under constant load they all become very hot, even at 50%. This might not be such a huge problem, though. If they stay below 80 degrees Celsius, they can work like that without getting damaged. Just don't burn your finger on them :) Actually, the RK3188 has a thermal throttling function, just like the Odroid chips. If they become too hot, they start reducing the clock frequency or even shut down completely, but they won't get damaged. If you still want more peace of mind, they can be cooled both actively and passively. I'm using an active VGA cooler with a small resistor to reduce its RPM. It's so silent that I have to put my ear on it to hear it working.

I'm not saying that Android TV sticks are the alpha and omega, I'm just saying that they are super cheap for the performance they offer and that, in my opinion, it's not worth buying other ARM devices for BOINC usage. They can run Linux, they can run Android, they do the job well. Still, the best performance/price ratio comes from muscular x86 processors, like the Intel Core i5 4670K. You can't beat that :) Obviously they use a bit more power than ARM devices (for equivalent performance), but it's not such a huge difference. Probably 30-50% at most (a rough guess based on my own measurements).
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
Last modified: 6 Jun 2014, 20:50:38 UTC

@ alexander This device runs Android 4.x and Linux; you simply change the micro SD card. I crunched for three months for Einstein, using a Lubuntu installation. The images for two Android versions and two Linux versions are freely downloadable from the Odroid forum. There is also a CyanogenMod image available for that device. This means you can load a VNC server and run the device without keyboard, mouse or monitor, via your PC. edit: sorry, no idea about 28nm or not; it is a simple device, it works, and there are different images available. That was enough to make my decision.

@ Andras: Yes, of course, the device is overclocked and needs additional cooling. I made my device watercooled, but for sure there are other ways to keep it cool enough. My Odroid has been running 24/7 since last August, sometimes on Android, sometimes on Lubuntu. If you need more ideas for cooling these devices, google for JagDoc. He's a member of Planet 3DNow! and has an Odroid farm!

Edit: If someone is interested in the performance of these ARM devices, go to nativeboinc.org and take a look at the statistics for the different projects.
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
"All Android TV sticks can also run Linux. They are meant to run Android by default, but there are several enthusiasts out there who make Linux images for them. In some cases it's not so easy to get it going, but if you're motivated and don't mind reading some forums, you'll get it working ;)"

Linux is not always the best choice. Einstein, for example, was much faster with Android than with Lubuntu (8 hrs versus 11.5 hrs). POGS is slightly faster with Kubuntu. In general, NativeBOINC has less overhead, which makes crunching a little bit faster than using the Berkeley BOINC client.
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
@ alexander "The images for two Android versions and two Linux versions are freely downloadable from the Odroid forum. There is also a CyanogenMod image available for that device. This means you can load a VNC server and run the device without keyboard, mouse or monitor, via your PC." That's good to know, especially if there are many devices.

"In general, NativeBOINC has less overhead, which makes crunching a little bit faster than using the Berkeley BOINC client." That's even better to know. Perhaps I need to try WCG's native BOINC to see if it performs better on CEP2.

@ Andras Earlier you wrote that your "strong PC" did 100 WU in 42 hours and needed 172.5 watts while crunching. Which of your computers was that?
Joined: 7 Dec 12 Posts: 87 Credit: 3,230,840 RAC: 84
"Earlier you wrote that your “strong pc” did 100 wu in 42 hours and needed 172,5 Watts during the crunching. Which of your computers was that?" The one with the Intel Core i7 2600 CPU (4 cores + hyper threading) and AMD Radeon GPU. I made some measurements lately. It seems that the Intel CPUs from the last few years add about 75W usage when under full load compared to when doing nothing. And the rest of the desktop computers (when doing nothing) seem to use another 70-90W. |
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
@ Andras I'm trying to sort out which approach is best for me, and I don't know if this is right, so if anybody can point out errors I'd be grateful. I'm using Andras's numbers.

The cheapest similarly* configured computer to match your Core i7 2600 computer, which by now is difficult to find in stores, would maybe be an i7-4770. This one costs today, in Sweden, about SEK 6000, which is about €663 (SEK/EUR rate: 9.05). You wrote earlier that you needed 24 USB sticks to match the performance of this computer (6 sticks per cluster and 4 clusters to crunch 96 WU in 42 hours; in that time the i7 did 100 WU, but I will treat the results as equal), which comes to 24 x $60 x 1.2 (added VAT and toll tariffs) = $1728. This is about €1267 (USD/EUR rate: 1.3641).

So you are right: the initial cost, the entry price of the altruistic investment, is lower for x86, about 48% lower, but over time that edge is lost because of the lower watts/WU of the ARM SoCs. According to this calculation by [TA]Assimilator1, which is based on your measurements ...

"So your PC does ~57 tasks/day whilst the ARM cluster does ~55/day (no need to round up :P), which with A@H's bizarre fixed credit rate per task regardless of it's length (compared with the same h/w) makes it easy to work out credit/w. So taking the average power draw of ~173w for the PC & 130w for the ARM cluster that's 3.035w/credit for the PC & 2.363w/credit for the ARM cluster, yep it's much more efficient! Forgot to say at the start btw, cool project :)."

... the difference in watts/WU is 22%, in favor of ARM.

I had the misfortune of locking in the price of my electricity at the very high price of €0.24/kWh (until January 2016). This means I would have to run your ARM clusters for a certain period before I could break even against the lower purchase price of the x86 computer, the i7-4770:

(€1267 - €663) / (€0.24/kWh x (0.173 kW - 0.130 kW)) = 58527 hours, or 2439 days, or 6 years and 8 months (24 hours a day).

Since it's reasonable to assume that the USB sticks will not last 6 years and 8 months (at least not in my opinion, considering the high temperatures they build up inside the cramped casings; and anyway, who wants to torture a 7-year-old computer, tired to the silicon bone, with most of its surviving transistor gates suffering from a leaky bladder), it leads me to the conclusion that this x86 socket is better than these ARM SoCs, although I admit this is a subjective call. This, of course, provided that I haven't made any calculation errors.

In reality it would take even longer to break even with the x86, since I'm comparing the energy consumption of an older computer with new Android SoC USB sticks. The Core i7 2600 TDP is 95 W and the Core i7-4770's is 84 W. This difference is also reflected, to some degree, in the motherboard and perhaps also in the memory modules. So let's assume that the lowered CPU TDP carries over to the other components, so that the whole x86 computer's energy consumption is lowered by a third of the TDP percentage difference:

(1 - (84 / 95)) x 0.33 = 0.0382 (i.e. 3.82%)

(€1267 - €663) / (€0.24/kWh x (0.173 kW x (1 - 0.0382) - 0.130 kW)) = 69156 hours, or 2881 days, or 7 years and 11 months (24 hours a day).

I'll stop now. TDP values can't be trusted, since they're often set from a marketing perspective. Also, when my electricity price goes down, in 2016, the break-even point will move even further into the future.
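For anyone who wants to redo this with their own prices, here is the same break-even arithmetic as a small Python sketch; all figures are the ones from this thread, so swap in your own:

    # Break-even arithmetic from the post above, with the thread's own numbers.
    X86_PRICE_EUR = 663    # i7-4770 box, ~SEK 6000 at 9.05 SEK/EUR
    ARM_PRICE_EUR = 1267   # 24 sticks x $60 x 1.2 (VAT/tariffs), at 1.3641 USD/EUR
    EUR_PER_KWH = 0.24     # my locked-in electricity price
    X86_KW = 0.173         # measured average draw of the i7 2600 box
    ARM_KW = 0.130         # measured draw of the 4-cluster ARM farm

    def break_even_hours(price_cheap, price_dear, kw_hungry, kw_frugal, eur_per_kwh):
        """Hours of 24/7 crunching until the dearer-to-buy but cheaper-to-run
        option has paid back its price premium through the electricity bill."""
        premium = price_dear - price_cheap
        savings_per_hour = (kw_hungry - kw_frugal) * eur_per_kwh
        return premium / savings_per_hour

    h = break_even_hours(X86_PRICE_EUR, ARM_PRICE_EUR, X86_KW, ARM_KW, EUR_PER_KWH)
    print(f"{h:.0f} h = {h / 24:.0f} days = {h / 24 / 365:.1f} years")
    # -> 58527 h, 2439 days, about 6.7 years, matching the figure above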
Maybe it's possible to improve x86 energy efficiency (WU/watt) with a SoC, like this one: http://www.asus.com/uk/EeeBox_PCs/EB1036/overview/ And it's cheap. At €229 it's possible to buy 3 of these for the price of one Core i7-4770 PC: http://www.idealo.de/preisvergleich/OffersOfProduct/4290877_-eee-box-eb1036-b0080-90px0041-m00140-asus.html It has a 40 W power adapter and it draws 6.695 W when idle, which is less than 10% of a Core i7-4770 PC (guesstimating). The biggest weakness, as I see it, is that it seems to have only one memory channel, despite the CPU and chipset supporting dual channel. http://ark.intel.com/sv/products/78867/Intel-Celeron-Processor-J1900-2M-Cache-up-to-2_42-GHz http://www.intel.com/content/www/us/en/chipsets/value-chipsets/mobile-chipset-hm70.html

:( So after all this I'll still buy the Q7 Android box, simply because I'm curious. I'll probably also buy one ASUS EB1036, just to compare, and at some point in the future, maybe 2015Q3, when Intel shrinks desktop CPUs to 14nm, I'll buy one i7. BUY, BUY, BUY! I'm realizing that, in a time when you can have all the everyday computing you need in your pocket, BOINC is a great excuse to buy new, unnecessary computers.

*http://www.prisjakt.nu/kategori.php?l=s174605671&o=produkt_pris_inkmoms#rparams=l=s174629764 http://ark.intel.com/products/75122/Intel-Core-i7-4770-Processor-8M-Cache-up-to-3_90-GHz http://ark.intel.com/products/52213/Intel-Core-i7-2600-Processor-8M-Cache-up-to-3_80-GHz
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
@ TBMS: You asked for comments, and I think you forgot something: upcoming developments. But step by step.

First: an Intel PC. If you assemble your PC yourself, you can choose the parts. From trial and error and from discussions at other projects I know that faster RAM decreases runtimes. So by adding faster RAM (the price difference between 1600 and 2100 MHz RAM is ~15€) you could increase the crunching speed by 10%, depending on the project and the code it uses. And development goes on: as I've seen on my A10, Crunch3r's code is 20% faster.

Second: there might be an Android app using NEON. NEON cut the runtime at Einstein from 12 hrs to 8 hrs. Both possible developments change your calculation.

Third: I assume the sticks communicate via WLAN. Count the number of devices and include your tablets, notebooks and phones; I'm pretty sure you'll need a different access point that can handle this. Low-cost devices cannot. Add this price to your calculation.

Fourth: ARM hardware. We are in a stage of very rapid development. The first devices with 2.3 GHz ARM CPUs are in the stores, development boards with ARM CPUs running 64-bit code are available, and ARM CPUs with built-in nVidia Kepler hardware are available http://www.conrad.at/ce/de/product/1179886/Mainboard-Zotac-ZT-JSTK1-10L?ref=list and can run CUDA code. There are OpenCL libraries for the Mali hardware available, which would allow OpenCL apps to run on ARMs. I mean, this is not the best time to build up farms. As I have learned at Einstein, the OpenCL app for the AMD cards runs without modification on the Intel GPU; just because of the long runtimes they made a special, short WU for Intel hardware. And the name of this thread, Support for ATI GPUs: such an app might run on an Intel onboard GPU as well ...

Fifth: the most important factor is FUN. Watching a farm of devices doing their job is .. I prefer some experimenting, updating devices, trying different setups, OSes, different projects, different experiences. But this is very personal, so this decision can only be made by yourself.

Always look at the BYTE side of life!

Cheers, Alexander

PS: https://dl.dropboxusercontent.com/u/50246791/Bierwerbung.jpg
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
@ Alexander Thanks for the input. I also forgot to include the interest on the part of the price of the ARM devices that exceeds the price of the x86. This will move the break-even point even further into the future, although not by much in today's low interest rate economy, which is constantly importing deflation from Asia. I must say that I'm surprised; I really thought ARM SoCs were more cost-effective than x86 sockets. Maybe they will be one day, when smartphones become a commodity in developing countries, but we're not there yet.

I have a steep learning curve ahead, since I've only used x86 and Windows. Even my tablet is x86 Windows, and I use it mostly for e-books. It's been convenient to remain in Microsoft's garden, but I suppose I ought to broaden my horizon. So in that spirit I followed your previous link to Hardkernel and saw: http://hardkernel.com/main/products/prdt_info.php?g_code=G138503207322 Do you know if it's possible to crunch on all 8 cores simultaneously on this Odroid?

Your latest link, to the Mainboard Zotac ZT-JSTK1-10L, could, apart from the Nvidia graphics, be much the same story as with the ARM sticks: a high upfront cost that is never recouped. And that leads me to the next question: how do graphics cards stand up against x86s from a purchase price and energy efficiency perspective? Perhaps it makes more sense to keep the old computers and install new, expensive graphics cards instead? This is all very confusing, since I want maximum crunch for my buck, but at the same time there are so many choices.

You mentioned faster RAM as a way to speed things up. Another way to increase crunching speed is to install a RAM disk, at least for some projects. I'll try that for CEP2. Their time limit of 18 hours has meant that my WUs have timed out at 60%, far from completing the task. I tried a few WUs but ended up aborting them. I felt that if I can't finish them, I might just as well drop them. I hope it will work, because their research is important. Finding the semiconductor replacement for silicon, in a time when we are fast approaching peak-all-fossil-fuel, will not only enable more efficient solar cells but maybe also prevent a technological plateau after 2025, when the shrinkage of ICs in silicon hits a wall. Sure, a few years here or there, but we will see it during our lifetime. Both events. Stealthy, and with profound consequences.

4 GB RAM disk for free (at the bottom of the page) - Dataram http://memory.dataram.com/products-and-services/software/ramdisk/ http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk

Also, I've been meaning to ask if any of you run antivirus software on your dedicated crunching computers? I don't have a dedicated machine yet, but will soon.
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
Unfortunately not; big.LITTLE is an either-or technology: either the four big cores or the four little ones run, not all eight at once. Maybe upcoming devices will be able to, but this one cannot.
The thing with graphics cards: it highly depends on the project. Milkyway has a gain factor of up to 50 when running on AMD cards, much less when using nVidia. Einstein has an average gain of 8 for both AMD and nVidia. Asteroids is a special case; the gain is below 2, which makes using graphics cards pointless from the standpoint of price and power consumption. And there is at least one project that has no CPU app: GPUGRID, which is an nVidia-only project. Looking back, the first GPU app for MW came from a private person, not from the project devs; the FMA4 app here comes from a private person; Seti and Collatz have excellent apps from private programmers. So I hope to see an OpenCL app here, which might also fit the Intel HD graphics ... ( Very best greetings to Crunch3r !! :-))) )
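To see why a gain below 2 is pointless while a gain of 50 is a landslide, here is a back-of-the-envelope energy check in Python; the wattages are illustrative assumptions, not measurements, and it assumes the CPU stays loaded while feeding the GPU:

    # Energy per WU for a CPU app vs a GPU app with a given gain factor.
    # The wattages below are illustrative assumptions, not measurements.
    CPU_HOURS_PER_WU = 8.0
    CPU_WATTS = 95.0         # assumed CPU draw under load
    GPU_EXTRA_WATTS = 150.0  # assumed extra draw of a midrange card under load

    for gain in (2, 8, 50):  # roughly Asteroids, Einstein, Milkyway from above
        gpu_hours = CPU_HOURS_PER_WU / gain
        cpu_wh = CPU_WATTS * CPU_HOURS_PER_WU
        gpu_wh = (CPU_WATTS + GPU_EXTRA_WATTS) * gpu_hours
        print(f"gain {gain:>2}: CPU {cpu_wh:.0f} Wh/WU vs GPU {gpu_wh:.0f} Wh/WU")
    # gain  2: 760 vs 980 Wh  -> the GPU actually wastes energy
    # gain  8: 760 vs 245 Wh  -> clearly worth it
    # gain 50: 760 vs  39 Wh  -> no contest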
My Odroid is my only dedicated cruncher; all other devices are 'multipurpose' devices. All Windows installations run with antivirus software. One word about the OS: some projects run faster under Linux. Simap is such a project. But this must be tested, project by project and app by app. Alexander
Joined: 7 Dec 12 Posts: 87 Credit: 3,230,840 RAC: 84
Hello guys! I'm on vacation, writing from my phone, so I'll have to keep it short. The calculations seem right. It's a very long time before you can recover the investment difference. At the time I wrote about how the ARM farm is more efficient, I thought I could buy them for half that price, but now I know it was a scam. I agree with everything Alexander replied. And yes, it's most of all about fun :) For me, watching an ARM farm crunch is fun, but then again, others prefer a nicely shaped case with x86 hardware in it :)

Hardware, as Alexander said, is developing fast. Both ARM and x86. Whichever you buy, it will become obsolete in 2-3 years. And you'll crave newer and faster hardware. Also, the best optimization could come not from hardware, but from the BOINC projects' code. No matter how good the algorithm is, I bet it can still be improved by focusing on keeping data in the CPU cache instead of reaching out to RAM, which is incredibly slower. Or by optimizing for specialized instruction sets in different CPUs, etc.
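As a tiny illustration of the cache point (assuming Python with NumPy installed), summing the same matrix along its contiguous rows versus its strided columns gives the same result at very different speeds:

    # Memory-locality demo: same data, same result, very different speed.
    # Assumes NumPy is installed; a C-order array stores its rows contiguously.
    import time
    import numpy as np

    a = np.random.rand(4000, 4000)

    t0 = time.perf_counter()
    s_rows = sum(a[i, :].sum() for i in range(a.shape[0]))  # cache-friendly walk
    t1 = time.perf_counter()
    s_cols = sum(a[:, j].sum() for j in range(a.shape[1]))  # strided, cache-hostile
    t2 = time.perf_counter()

    print(f"row-wise:    {t1 - t0:.3f} s")
    print(f"column-wise: {t2 - t1:.3f} s")  # typically several times slower
    assert abs(s_rows - s_cols) < 1e-6 * s_rows  # both sums agree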
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
@ Andras Don't eat before swimming, or you'll sink : )

"For me, watching an ARM farm crunch is fun, but then again, others prefer a nicely shaped case with x86 hardware in it :)"

I, too, would like to have a farm, but why choose? The Asus I pointed out earlier is x86 and has a "nicely shaped case", and is cheap enough to be farmable. What I like most about it is that it has passive cooling. I don't like listening all day to a hissing sound. These hissing sounds tend to get worse as computers age. Sometimes even new computers have a high noise level, like one laptop I bought that I had to give away after a few months, simply because it gave me a headache.

@ Alexander "Milkyway has a gain factor of up to 50 when running on AMD cards, much less when using nVidia." This is a good number. I wish the project leaders could be more open about these numbers, so that we could allocate the hardware in a more efficient way.

"Einstein has an average gain of 8 for both AMD and nVidia. Asteroids is a special case; the gain is below 2, which makes using graphics cards pointless from the standpoint of price and power consumption." I think A@H should discontinue its support for Nvidia graphics cards, since the app's energy efficiency is so low, or at least post a warning sign, in red letters, on the front page. I wonder how many watts per WU it's grilling? One alternative is to advise crunchers to only use it during winter, and only if they have electric heating in the house. That way we can turn off the radiator in the room where the computer is.

"Looking back, the first GPU app for MW came from a private person, not from the project devs; the FMA4 app here comes from a private person; Seti and Collatz have excellent apps from private programmers//" Maybe someone could contact whoever wrote the Milkyway app and ask him if he could do the same for A@H? Do you know what the gain is for Seti and Collatz, and all the other projects out there? Not that I'll ever waste a single nWh on the Seti project.

I'm sad to see SIMAP ending their project later this year. I only recently began to crunch, and I saw theirs as one of the more important among the many different BOINC projects. I've already bailed. It's no fun crunching a dead-end task.
Joined: 28 Apr 13 Posts: 87 Credit: 26,717,796 RAC: 102
Good morning,

first, MW: the project's algorithm fits 1:1 into the hardware structure of AMD cards, which is the reason for the high gain; it cannot be made a global rule. This seems to be more of a 'once and never again' situation.

second, Seti and Collatz: sorry, I have no numbers; I join these projects only from time to time, for one or two days, just so as not to forget them. They are not the primary target of my interests.

third, 'waste a single nWh': I mean, each and every project has its right to exist and a right to be supported. One can never say what the results will show in the future. There are private persons working very hard on developing and optimizing the code, porting it to different platforms. Watching the discussions at Seti, one gets a feeling of the fun and the enthusiasm of these people, the best ingredients for a well-running project: the feeling of being part of it. One may or may not share the idea of ever being able to find ETs, but this is really up to everyone's own personal experience.

fourth, Simap, A@H and private programmers: Simap's results are used by scientists regularly and frequently; it's a free database run by a university. Crunching for this project means being part of living, useful science. No reason to leave it just because they found a way to reorganize themselves to be more efficient. The database will be used anyway! I hope that the A@H database is also frequently used by scientists. Repeating a post from Bernd Machenschalk (Einstein@Home): there are only a few hundred people worldwide able to write highly optimized apps for GPUs. One needs to keep in mind that a GPU app must run under different OSes, different driver versions and different hardware generations. Writing an app like this is not something one can do over a weekend. It's easy to find these guys; look at projects where you have the feeling of life and enthusiasm (Seti sometimes has up to 100 posts/day), have good contact with the devs and mods, get frequent updates on the results without having to ask for them every time, and so on. Everyone can contact Crunch3r, Slicker or Raistmer (just to post three names) by PM; maybe they will answer you! But be aware of what you are asking for!

Cheers Alexander
Joined: 4 Apr 14 Posts: 23 Credit: 197,760 RAC: 0
I actually did a few days of crunching for SETI in 2001, or was it 2002, but lost interest almost immediately and didn't return until I joined A@H, two months ago. It seems I have different priorities today, compared to those I had back then. Also, we have many more choices today. You are of course right about SIMAP, and they haven't done anything wrong. I wish them good luck, but that doesn't mean I'm going back. It's not fun anymore.

"//the best ingredients for a well-running project: the feeling of being part of it." I agree. That's why I posted a request a few days ago, asking if the admin could clarify the science results page by numbering and dating the asteroids that we've crunched. It's difficult to see what progress we are making just by looking at a long data list. A chronological listing would be much appreciated. If it's not too much to ask for. http://asteroidsathome.net/boinc/forum_thread.php?id=314&postid=3142
Joined: 1 Jan 14 Posts: 302 Credit: 32,671,868 RAC: 0
Good morning, Rosetta looked into going the GPU route at one time too; they decided it wasn't worth the effort, as they couldn't figure out how to scale down their models to fit into an ordinary modern-day GPU. Their model just isn't designed to work that way, so they stay CPU-only. The basic problem with a GPU is that it can do a few things extremely well, while a CPU can do a TON of things pretty well. So if your project can utilize a GPU, you are golden, but if not, you are up a creek with no paddle. Trying to make a GPU work outside of its 'golden zone' and get good results too is like trying to train a monkey to drive a car; it's possible, but you may not want to be a passenger in his car! And if the results can't be trusted, you might as well not waste your time; no one will ever use your data for anything.

As for Seti: even if they never find anything, they have proved that they searched using those parameters and didn't find anything. Negative results do not work for Master's and Doctoral theses, but they do work in the outside world. One more check box saying 'nope, not there' means fewer things to do for the next person. I used to crunch Seti many years ago but left for my own reasons; I still like the idea though!