The Intel Arc A750 in 2023 – Can it Compete?

Captions
The Intel Arc A750 is a performance-segment graphics card launched in October of 2022. As part of the debut of Intel's high-end Alchemist cards, the A750 was slated to compete with the Nvidia RTX 3060, and despite some issues we'll discuss in this video, they got pretty close, given this is their first generation of discrete graphics solutions.

Before we get into this video, don't forget to leave a comment if there's something you think I missed, whether it be technical discrepancies or your own personal experience. I'm interested in what you have to say about this card, and I can't wait to see what the general opinion seems to be. Without any more to preface, let's dive into the Arc A750 and determine its value and positioning in the current, somewhat competitive GPU market.

Right off the bat, the Arc A750 is an attractive graphics card that does away with fancy RGB lighting and angular shrouds in favor of a more Nvidia Founders Edition type of style. Aesthetically speaking, everything is either matte black or reflective chrome, and, weird to feel on an electronic, there's a rubber coating on the bottom of the shroud surrounding the fans. It doesn't feel cheap, and although the shroud and backplate are made of plastic, it looks visually pleasing and feels premium in your hand. The backplate features the model name and brand of the card, and there's a white illuminated Intel Arc logo on the side. The A750 doesn't light up like the A770 does, but I think what's presented here is pretty tasteful, in the same vein as my prior RTX 3070 Ti review.

When it comes to heft, it's a pretty heavy card, but that seems to be down to the density of the heatsink, as the card itself is physically unimposing and only two slots deep. Combined with the relatively manageable 225-watt total board power, this means the cooler has more than enough grunt to keep the GPU die cool, the fans quiet, and clock speeds high. Just to give you an idea of how the heatsink performs, it never hit or went above 80 C during any game sessions with the card pegged at 100% usage. With ray tracing on, temperatures settled at around 73 C and in the worst case peaked at around 78. This was achieved with the two fans at their stock settings, and during use they never became audibly distracting. You can definitely hear them, but it's not loud like my 3070 Ti can get sometimes. Especially compared to Nvidia's older 10- and 20-series Founders Edition flavors, this is significantly improved over their constant 83 C temperatures and resulting clock throttling, not to mention they sound like my guinea pigs when I'm feeding them.

When it comes to power requirements, the card's 225-watt TBP is actually worse than the RTX 3070 and 3060, but I will say that Intel's metric measures the power draw of the entire graphics card, whereas Nvidia only measures the power of the GPU core. I can tell this because not only does the core power of the Arc A750 never really go above 190 watts, it usually hangs out at around 140 watts even when I give it the ability to draw more power. This flavor features an 8+6-pin PCIe power connector, which, while thankfully not one of the new 12- or 16-pin PCIe 5.0 connectors, is somewhat restrictive if you're looking to tune this card once software like MSI Afterburner finally catches up and enables support for these GPUs. For the specifications and performance presented, I think the power draw is a little high, but it's not ridiculous like some other cards on the market, and it allows this card to run on more inexpensive power supplies and fit into a wider variety of systems.

To communicate and transfer data to the rest of your PC, the A750 comes equipped with a 16-lane PCIe 4.0 connector and controller capable of 32 GB/s of bidirectional data transfer. While this card isn't PCIe 5.0 capable like Intel's own Alder Lake CPUs, the bandwidth on tap is more than enough to fully saturate the onboard GPU and memory, and even with Resizable BAR you'll be more than fine running PCIe 4.0. Speaking of which, if you're looking at picking up one of these cards, be aware that it basically requires Resizable BAR to get performance that's anywhere close to playable. I didn't take benchmarks with Resizable BAR off, but if you want me to make a dedicated video on it I'd be more than happy to, because it's something I think is very important with the Alchemist lineup of cards. With it off, just know that overall average performance is depressed, and the one percent lows are affected most severely; things just become a stuttery mess, so make sure the setting is turned on in your motherboard settings. I'd also note that it will work best with a PCIe 4.0 capable CPU and motherboard. I believe the Z390 motherboard I've got technically supports Resizable BAR on 9th-gen chips, but since those are limited to PCIe Gen 3 you'll run into performance bottlenecks; stick to Intel Rocket Lake and LGA 1700 platforms for the best stability and performance. It kind of stinks that this card is more or less limited to newer Intel and AMD systems, as this can outright kill a lot of people's prospects for picking one up.

In terms of display connectors, the A750 features three DisplayPort 2.0 ports along with a single HDMI 2.1. For driving my 4K monitor these ports work perfectly, and they'd allow me to upgrade to more displays in the future. This DisplayPort setup is also superior to what the RTX 4090 sports, with that card only rocking DisplayPort 1.4a connectors. In practical terms, this means you'll be able to push higher-refresh or higher-resolution monitors with the A750. That's not to say you'll be able to render games like that, but the card is physically capable of doing so, whereas Nvidia cards are basically stuck half a generation behind. AMD, though, is half a generation ahead of Intel, as their RX 7900 family is shipping with HDMI 2.1a and DisplayPort 2.1, once again upping the available bandwidth to your display. For the resolutions this card is targeted at, however, the display connectors support what's needed quite well, so I really have no complaints. For a first look, the card seems to support modern features and looks like a solid first attempt from Intel. However, there are two other aspects of the story that are important to bring up: the die specifications and the performance.

Let's dig into the die being presented and take a look at how the performance we'll be discussing is achieved. To start, the Arc A750 features a cut-down 6 nm Xe HPG-based DG2-512 GPU die. The specific core, while not technically the best Intel offers, provides all the functionality found in the fully unlocked A770 as well as an identical memory interface. Intel architects its GPUs a little differently from Nvidia, though; they're actually more similar to how AMD designs their chips in terms of hardware layout. Where Nvidia designs their GPUs to function more as banks of individual cores, Intel builds theirs more like SIMD co-processors, built to parallelize a single operation on a chunk of data at a time. In fact, looking at the block diagrams of the Xe-cores, which build up the largest unit in an Intel GPU, the render slices, they're made up of subunits called Vector Engines, which handle 256-bit packed operations. This should sound similar to AVX2. Each Vector Engine can then theoretically process eight floats, sixteen half-floats, or eight ints per clock cycle. In total you have 448 of these Vector Engines in the A750; in the full die you get 512 of these execution units, hence the DG2-512 codename. From a top-down view, only one render slice is disabled, bringing the total active down to 7 from the 8 found in the fully unlocked die. Similarly to an Nvidia GPC or an AMD shader engine, a drop in render slices equates to fewer shaders, tensor cores, rasterization operation pipelines, and texture mapping units. This affects performance at almost all levels, because you've got less hardware to parallelize the different types of operations that are frequently offloaded onto a GPU. Mapping textures to geometry? That's affected. Rasterizing pixels to begin the post-processing graphics pipeline? That's also affected. And most of all, if you're trying to shade the geometry you just rendered and textured, that's affected as well. Now, I'm making it seem like this card is incapable of rendering 3D graphics because a single render slice is disabled, but think of it more like the difference between an 8-core and a 7-core CPU: the 7-core CPU will still be beyond capable, it'll just fall behind in the most intensive scenarios.

In total, the card still has 3584 FP32 data paths clocked up to 2400 MHz. In reality, clocks were basically stuck there under load, so I'd expect similar clocks from other cards, as this is what Intel advertises publicly. Along with the base shaders, you get one ray tracing core per Xe-core, as well as 16 Intel tensor-core equivalents, what they call XMX cores, which handle matrix operations. They're designed to offload joint_matrix computations onto 1024-bit wide FPUs. Similar to how AVX-512 operates, you first do two accelerated loads and an accelerated arithmetic operation, and then an accelerated store. This can speed things up dramatically: instead of issuing hundreds of instructions to process two 8x8 matrices, you can issue four accelerated instructions and free up your vector shader cores for other compute tasks. It's a cool example of convergent design from Intel and Nvidia, and it firmly puts this card ahead of what AMD offers in terms of raw feature set.

When it comes to rendering, the GPU comes in with 224 TMUs and 112 ROPs, making this card 100% capable of rendering and texturing 1080p and 1440p games. Whether the games will perform that well is another story, but this configuration leads to a total of just under 538 gigatexels per second and 269 gigapixels per second respectively. This absolutely annihilates the previously reviewed Titan X Pascal and RTX 3070 Ti, so why this card isn't performing like said cards is a bit puzzling. Let's dig deeper into the memory configuration to try and obtain some clues as to what could be going on.

The memory configuration found in the A750 is eerily similar to that of AMD's RDNA 2 cards. Featuring eight 8-gigabit GDDR6 memory dies, the card runs them off a 256-bit controller clocked at 16 Gbps per pin. This equates to 512 GB/s of total memory bandwidth, which is competitive with the RX 6900 XT and beats out the RTX 3070. The card also comes with 16 MB of L2 cache, putting it ahead of the couple of megabytes found in Nvidia's Ampere but behind AMD's Infinity Cache-bearing models and Nvidia's recently released Ada line. The cache will mostly help augment the GDDR6's memory bandwidth, as it allows the Xe-cores to simply fetch data locally instead of having to travel off-chip to a separate integrated circuit. The memory configuration doesn't tell us everything about how the card performs, but it indicates that it will be strong in bandwidth-heavy scientific workloads and will also handle alpha transparencies in games more easily, as this card has more to work with.

The memory was also, unfortunately, not really tunable. This is probably because the telemetry information doesn't hook into MSI Afterburner properly, and Intel's own utility only really allows you to change the power slider; there aren't any options to adjust the VRAM clocks. When setting the card to the maximum power draw, though, it didn't draw anywhere near 228 watts on the core. Interestingly, it does allow for underpowering the card: the Intel driver only lets you raise the core voltage, you can't drop it, but what you can do is set the power limit to a lower target. If you want to see a video of me underpowering the card, let me know in the comments, because it's something I'm definitely interested in trying out. From my initial testing it seems to behave properly, drawing most of the power away from the core and lowering core clock speeds primarily; memory was consistently pegged at 16 Gbps, so it doesn't seem like we have to worry about any of the weird RDNA 3-type memory throttling issues.

For other compute-related features, it's important to mention that all the Arc cards feature AV1 encode and decode, which is helpful for video rendering and recording. It really helps with file sizes, in turn meaning you can fit more videos onto your secondary storage and, kind of selfishly, requiring less time to upload or download said content. This card also supports Intel's Deep Link technology, which leverages the integrated GPU in your Intel CPU to further accelerate basically anything. There's first the Hyper Compute feature, which allows certain workloads to be split between the iGPU and discrete GPU, and there's also Hyper Encode, which utilizes the encode/decode engines in your CPU and GPU to accelerate frame rendering in applications such as HandBrake and DaVinci Resolve. I'm sure as time passes more software will integrate this hardware acceleration, but for now the programs it does work with are rather impressive.

Circling back around to the die featured in this card, the DG2-512: it's a larger die in terms of square footage and transistor count than what's featured in the RTX 3070 Ti. Not only that, this GPU is based on TSMC's N6 lithographic node, making it by default more dense and efficient than Samsung's 8 nm, yet when it comes to actual performance per watt it falls behind the RTX 3060.
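To tie the spec discussion together, here's a back-of-the-envelope sanity check of the theoretical figures quoted above, using only the unit counts and clocks mentioned in this video (448 Vector Engines with 8 FP32 lanes each, 2400 MHz, 224 TMUs, 112 ROPs, and a 256-bit bus at 16 Gbps per pin); it's a sketch of standard peak-rate arithmetic, not anything measured:

```python
# Back-of-the-envelope peak-rate math for the Arc A750, from the unit
# counts and clocks quoted in this review.

clock_hz = 2400e6  # 2400 MHz advertised boost clock

# FP32 compute: 448 Vector Engines x 8 lanes = 3584 data paths,
# 2 FLOPs per lane per clock (counting a fused multiply-add as 2 ops)
fp32_lanes = 448 * 8
tflops = fp32_lanes * 2 * clock_hz / 1e12
print(f"FP32 throughput: {tflops:.1f} TFLOPS")        # ~17.2 TFLOPS

# Fill rates: one texel per TMU and one pixel per ROP per clock
gtexels = 224 * clock_hz / 1e9
gpixels = 112 * clock_hz / 1e9
print(f"Texture fill rate: {gtexels:.1f} GTexel/s")   # ~537.6, "just under 538"
print(f"Pixel fill rate: {gpixels:.1f} GPixel/s")     # ~268.8, "roughly 269"

# Memory bandwidth: 256-bit bus x 16 Gbps per pin, divided by 8 bits/byte
bandwidth_gbs = 256 * 16 / 8
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512 GB/s
```

These are the same 17.2 TFLOPS, fill-rate, and 512 GB/s figures cited throughout the review, which is exactly why the gap to real-world game performance is so striking.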
There's just a huge discrepancy between what should theoretically be possible and what's actually been achieved. It's kind of ironic that Intel's marketing slides repeat the phrase "no transistor left behind" when it seems like over 4 billion of them are just dark. The microarchitecture seems to be aimed at compute throughput, but it may need some optimizations for gaming workloads. At least you do get DirectX 12 Ultimate support, meaning you'll be able to run the latest games with the latest settings turned on, but this kind of highlights some of the different approaches you have to take when segmenting your graphics solutions.

Let's see how the card performs in games, just to get an idea of where it falls in terms of relative performance. To test the Arc A750, we'll be using my personal system, a mid-range build featuring an i7-11700K clocked to 5 GHz all-core, 64 GB of dual-rank 3200 MT/s DDR4, and the PCIe 4.0 capable Gigabyte Z590 Ultra Durable motherboard. I have a new Z590 board coming in the mail, but for this review the Gigabyte board will perform just fine; it will allow the graphics card to reach its maximum potential. As the boot drive we've got a Samsung 970 Evo 500 GB, and to store games, the PCIe 4.0 capable Western Digital SN770 1 TB. This will allow everything in the system to operate at maximum speed and will eliminate secondary-storage bottlenecks. We'll be testing all our games at medium settings, with results recorded at 1080p, 1440p, and 4K, just to see how the card scales with resolution. Let's dig into the benchmarks and see how this card performs.

Starting off with a popular Source engine-based game that's not known for being particularly demanding: Apex Legends. This battle royale game from 2019 performs pretty well, but ultimately below where I was expecting, with an average at 1080p of 106 FPS and a one percent low of 48.
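Since every benchmark below quotes an average and a "one percent low," here's roughly how those figures come out of a frame-time capture. This is a simplified sketch; capture tools like PresentMon or CapFrameX each have their own exact conventions for the metric:

```python
def summarize(frame_times_ms):
    """Turn a frame-time capture (milliseconds per frame) into the average
    FPS and 'one percent low' FPS figures used in this review.
    Simplified: the 1% low here is the average FPS over the slowest 1% of
    frames; real capture tools differ slightly in their definitions."""
    n = len(frame_times_ms)
    avg_fps = 1000 * n / sum(frame_times_ms)

    # Slowest 1% of frames (at least one frame)
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    one_pct_low_fps = 1000 * len(worst) / sum(worst)
    return avg_fps, one_pct_low_fps

# A steady 10 ms cadence with one 40 ms hitch: the average barely moves,
# but the 1% low exposes the stutter.
times = [10.0] * 99 + [40.0]
avg, low = summarize(times)
print(f"avg {avg:.0f} FPS, 1% low {low:.0f} FPS")  # avg 97 FPS, 1% low 25 FPS
```

That's why a big gap between the average and the 1% low (like Apex's 106 versus 48 here) reads as stutter even when the average looks healthy.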
The game technically exhibited stutters, but ultimately performed rather smoothly, and I didn't notice any hitches during gameplay besides when in the air during the landing phase. This game being built on DirectX 11 could possibly hint at why the observed performance is lower than expected, but it ultimately doesn't really matter since it's still playable. Scaling up to 1440p, the average and one percent low come in at 94 and 74 FPS respectively, and 4K exhibits playable but ultimately depressed performance compared to 1080p and 1440p. All resolutions were playable on this card, but if you're looking for an ultra-competitive experience, lowering the settings further would probably help gain you some sweet, sweet frames.

Our first Unreal Engine 4 title, and Xbox One-destroying beast, ARK: Survival Evolved performed better than expected, given this game is notoriously more demanding than Apex Legends. At 1080p, the average and one percent low of 128 and 106 FPS hint that this game is still very playable with very little stutter at these settings. Even though this game is also built on DirectX 11, it still performs pretty well, hinting that there may be some subroutine optimizations this card can take advantage of, or it could mean that this card performs inconsistently between games on the same API; either way, it seems to be a fluke and not something that's been done intentionally. Moving up to 1440p, the game was still playable with a respective average and one percent low of 111 and 96 FPS. 4K was also playable, with one percent lows coming in at 64 FPS and the average at 75.
Would I personally play this game at 4K? Probably not; I'd rather play it at 1080p with higher-quality visuals at a locked 60 FPS. Either way, this card can provide you the opportunity to get into some very light 4K gaming with Arc.

Up next is a Frostbite engine game, and despite it being built on DirectX 12, it doesn't perform all that well. Battlefield 2042, a notoriously difficult game to run at higher settings, actually performs pretty well at 1080p but falls off pretty hard at 1440p and 4K. The average and one percent lows start off promising at 1080p with 111 and 71 FPS respectively, but the 1440p figures don't seem to properly convey that this game was stuttering, and it got significantly worse at 4K. I'm not sure what's causing this; it could just be a lack of compute performance, or maybe this game is just poorly optimized. Either way, if you're eyeing an Arc A750, stick to 1080p in this game, because you probably won't have that great of a time at any higher resolution. It's unfortunate, but it could be slightly remedied by turning the settings down to low or enabling the dynamic resolution scaler, which would help stabilize your frame times. I wouldn't recommend picking this card up if you primarily want to play Battlefield 2042. Just to see if it was a Frostbite engine issue, I also ran benchmarks on the much older Battlefield 4. The results were significantly improved, with the averages hitting the engine frame-rate limit at 1080p and 1440p and then averaging 144 FPS at 4K, making it high-refresh-rate playable. The performance issues experienced by Battlefield 2042 are definitely just issues with that game, and if you're looking to experience the older Battlefield games, it would be a blast.

Our next title is another Source engine game, however this time based on DirectX 9.
CS:GO, a game that's been popular since I was in middle school, actually received a doubling of performance in Intel's December 3rd driver update. However, given that my 3070 Ti maxed out my 11700K at over 300 FPS, I think it's safe to assume that the A750 is what's bottlenecking the system here. The performance also scaled with resolution, once again indicating a GPU bottleneck as opposed to a CPU-side one. The averages at 1080p and 1440p were 256 and 242 FPS respectively, and at 4K the average dipped to 151. The maximum frame rate scaled most prominently between resolutions, but the one percent lows being relatively flat indicates there was very little stuttering going on. This game, while it started off on kind of rocky footing, has improved dramatically with driver updates, proving Intel is capable of reworking their software.

Cyberpunk 2077, another game built on DirectX 12, performed rather poorly, though right around where I expected it to. At 1080p it was most playable, with average and one percent lows coming in at 77 and 53 FPS; this indicates minor stuttering, but during gameplay it wasn't all that noticeable. Moving up to 1440p, the average hung at 60 FPS but the one percent lows came in at 45 FPS, which isn't terrible and is still playable. 4K, though, and I'm not sure why, just felt sluggish, which is strange given I've noticed this game usually feels pretty good at 30 FPS. The one percent lows came in at 30 with the average at 37 FPS, meaning it should feel fine to play, but it felt like my movements were super slow and sort of delayed. This is part of the performance profile I'm kind of stumped by, because the game doesn't exhibit this behavior on older Nvidia cards that run it much more poorly. It was strange, but if you want to play some Cyberpunk on the A750, I would stick to 1440p or 1080p, as you won't experience any of the weird control issues.

Up next is our first id Tech 7-based game, and I have to say that even at medium settings, everything looks amazing. Doom Eternal, written using the Vulkan API, performed excellently and was playable at all resolutions, with a one percent low and average of 137 and 175 FPS at 1080p. The A750 can run this game very well, and moving up to 1440p, the card exhibits a higher maximum than at 1080p and achieves an average and one percent low of 136 and 96 FPS respectively. At 4K the card returned an average of 69 FPS and a 1% low of 53, making this game still playable even at the higher resolutions. If anything, I'm very impressed with how this game performs; however id Tech handles alpha transparencies allows them to spam them without hitting performance too hard. Whether it be blood spraying everywhere from your chainsaw or an explosion on the back of a spider-brain thing, it will look great and perform well at the resolutions tested today.

Grand Theft Auto 5, built on Rockstar's proprietary RAGE engine and in this case utilizing the DirectX 11 API, performed playably, but there were some issues. Most notably, no matter how far I dropped the settings, the game would not push above an average of about 100 FPS. I'm not sure if there's an issue with this game or if it's the API being utilized, but Red Dead Redemption 2 doesn't have this issue. Either way, GTA 5 at 1080p performed pretty well with an average and one percent low of 95 and 60 FPS, with 1440p bringing that down to just 90 and 59 FPS respectively. 4K saw a large performance drop, with the average coming in at 49 FPS and the one percent lows at 34.
I do think it's important to mention that performance didn't improve by a significant amount when dropping the resolution from 1440p to 1080p, so there's definitely something going on with this game; I'm just unable to pinpoint exactly what. Either way, if you want to play the game locked at 60 FPS at 1440p, you'd be able to, and it would be a more than enjoyable experience.

Halo Infinite, built on the DirectX 12 API, performed well and exhibited proper scaling going from 1440p to 1080p, with a 1080p average and one percent low of 97 and 39 FPS. The A750 was able to play this game comfortably, but there were some stutters, though my 3070 Ti exhibits stuttering in this game too, so it's definitely not unique to this card; the fact that it's there is still noticeable, however. At 1440p the average and one percent lows went down to 73 and 32 FPS respectively, which, while worse than the 1080p results by a long shot, was still playable for the most part. 4K, though, especially for a competitive first-person shooter, wasn't really playable, with the card never hitting the bare minimum 60 FPS that many of us require from our systems. If you're looking to pick up this card to play The Master Chief Collection, though, those games never dropped below 60 FPS at 1440p and 1080p, but they exhibited some stuttering at 4K similar to what Infinite exhibits. It's kind of disappointing that these older games can't run particularly well on this card, but if you're looking to get into them, the AMD RX 6600 would be an awesome alternative for a similar amount of money.

Minecraft RTX, built on its proprietary DirectX 12-based engine, was able to run on this card, but it didn't run particularly well. Even at 1080p the card averaged 50 FPS, and the one percent lows came in at 36, indicating that the game was stuttering. This was technically playable; however, I'd rather just play this game with shaders, as it will run much better. At 1440p and 4K the performance wasn't playable either, but the fact that it can run this feature this well without being a literal slideshow is impressive, as it shows the strength of the Xe ray tracing cores and how they're catching up to Nvidia. Once again, I'd rather play this game vanilla or with shaders, because the RTX performance is just too slow.

Up next is another DirectX 12 title, built on the proprietary IW 9.0 engine. Modern Warfare 2, a game that has historically been sold with Arc GPUs, performs all right at 1080p, with an average and one percent low of 75 and 40 FPS respectively. The one percent lows indicate some stutter, but in reality this was isolated to the first few seconds after spawning; after that, performance hung out much higher. 1440p was also sort of playable, but the stuttering became much more pronounced and occurred outside the initial spawn window. 4K was honestly not playable for a competitive shooter, and even lowering the settings down to low didn't seem to affect performance all that much at 1440p and 4K, so keep that in mind if you're looking to pick one of these cards up. I'd keep it restricted to 1080p in this game and probably lower some settings further just to squeeze a few more FPS out of this card.

Up next is a DirectX 11 title, Overwatch 2, and despite the older API it performed very well at all the resolutions tested. At 1080p the card delivers an average of 191 FPS and a one percent low of 86, indicating that the game remains beyond playable at almost all times. Moving to 1440p, the game returned an average of 150 FPS and a one percent low of 114.
This also remains playable, and although the average and maximum performance is overall depressed, the one percent lows improved dramatically when jumping up to 1440p. At 4K the game is still playable, with an average and one percent low of 75 and 53 FPS respectively, but like most of the other games we've discussed, I'd stick to 1440p or 1080p, as it performs much more smoothly there and doesn't run into unplayable performance. This game also seems to just run well on anything, so keep that in mind if you're looking to pick up an A750 to get into Overwatch.

Our next game is another Unreal Engine 4 title, this time utilizing the DirectX 12 API. PlayerUnknown's Battlegrounds has been popular for years now, and the performance on this card reflects its relative popularity and the subsequent effort to optimize it. At 1080p the average and one percent lows came in at 182 and 117 FPS respectively, dropping to 135 and 101 FPS at 1440p. 4K remains surprisingly playable as well, with the average and one percent lows clocking in at 72 and 58 FPS. It seems like utilizing DirectX 12 really helps PUBG's performance on this card; if you're looking to pick up an A750, it might be worth lowering the settings further and then rocking the DirectX 12 renderer. It seems to help performance a lot, and I can't recommend using it enough. This card seems well suited to play this game up to 4K, but to be safe I'd be most comfortable recommending it for 1440p and below.

Rainbow Six Siege, built on the Vulkan API, performed incredibly well at all resolutions. This makes sense, as the game is now 7 years old and also utilizes a modern renderer, making it a perfect candidate for running well on the A750. At 1080p the card achieved an average of 242 FPS and a one percent low of 204, making this card beyond high-refresh-rate competitive. At 1440p the A750 returned an average and one percent low of 162 and 142 FPS respectively, and 4K continues the strong trend with an average and one percent low of 81 and 72, making it once again beyond playable. I would personally stick to 1080p or 1440p to get the ultra-competitive performance figures, but if you want to crank up the resolution to see the highest fidelity of your enemies at a distance, the card is happy to oblige. This game being older definitely helps it perform well, and on the A750 it performs like a champ.

Our next RAGE engine title, Red Dead Redemption 2, performed better than GTA 5 on average but exhibited similar performance trends overall. The FPS scaling was also more pronounced when dropping the resolution to 1080p, with the average and one percent lows coming up to 107 and 60 FPS, up from the 91 and 67 FPS recorded at 1440p. 4K was also still playable, with the average and one percent low coming in at 54 and 43 FPS respectively, and when it came down to it, it felt smooth to play. It only became smoother as the resolution decreased, which makes sense as the frame rates increased, but the sweet spot for this game seems to be 1440p, as you get improved visuals at a performance level that's still playable. If you want to get into Red Dead on the A750, it's definitely possible, but just be aware that there are also some FSR 2.0 bugs in this game. It's really not a huge deal, as the game performs fine natively, but if you get into the game on this card and flip on FSR only to be greeted by a progressively darkening screen, that's what's going on.

Up next is another DirectX 12 title built on the IW 9.0 engine, like Modern Warfare 2.
Warzone 2.0 the follow-up to the immensely popular war zone runs almost identically to the aforementioned MW2 but does have some lower one percent lows probably due to more players being in each game as opposed to a traditional multiplayer match but the averages are nearly identical showing the results are pretty repeatable on this engine with an average of 1080p of 74 FPS it's playable with the 1 lows of 34 indication of stuttering this was primarily due to the landing phase similar to Apex Legends and after that the game ran smoothly without stutters 1440p was a mostly similar story however this time with an average and one percent low of 62 and 28 FPS respectively 4K was basically unplayable for a competitive shooter so if you want a rock Warzone 2 or Modern Warfare 2 on the a750 be prepared to stick to primarily 1080P and don't be afraid to lower the settings our last game Wolfenstein the New Colossus built on the idtech 6 engine the same as 2016's Doom was included because of its inclusion of variable rate shading this game also being built on Vulcan just runs phenomenally well similarly to Rainbow Six Siege and doom Eternal at 1080p the average and one percent low came in at 238 and 150 FPS and at 1440p the 183 FPS average and 110 FPS one percent low proves that the a750 is capable of rendering games at high frame rates at 4K while the performance was playable it wasn't higher refresh playable with an average and one percent low of 98 and 51 FPS respectively the maximum May mislead a little as it looks incredibly high for 4K but in reality the performance was much closer to the 90 FPS mark either way the game running vrs combined with the a750s ability to Hardware accelerate it improves performance dramatically and I can't wait for its inclusion in other games and their engines the Intel Arc a750 is kind of like that underachieving friend you've got that while trying their best still gets C's on their assignments they've got the horsepower but their 
performance is just inconsistent like this friend the a750 has the transistor and power budget to make a very nice GPU that's beyond efficient and performs excellently however there's just a disconnect between what's theoretically possible and what's actually being presented with this Hardware and I think that's what's troubling a lot of people when someone hears that a graphics card has 17.2 teraflops of compute they expect a 3070 in terms of performance unfortunately what we get is a 3060 with some performance compromises in older games the software stack though specifically for drivers and other compute focused features are improving however the rate at which it's improving we have yet to really see we've gotten DirectX 9 fixes which improve performance a lot but we're unsure if DirectX 11 will get the same treatment if it does I'll definitely be covering it however the likelihood of such an update happening is pretty slim if you do GPU programming though then this card provides excellent cuda-like Alternatives through using sickle or opencl and while the learning curves are steeper than Cuda once it all compiles it's immensely satisfying the Intel 1 API is becoming better over time and since my last exposure to it in early 2021 things have improved a lot and the functionality is also easier to incorporate now it's basically as simple as Cuda in terms of setting up operations but the Syntax for parallel for Loops is a bit different and takes some time to fully unpack and fully learn what's going on there are tons of examples online ranging from projects to simple University lectures to help you pick up sickle and learn some of the capabilities of the tool it's an open source alternative to Cuda and may be worth investing your time into if you're a parallel programmer as for gaming performance The a750 Falls roughly in line with amd's RX 6600 however it's significantly more expensive and draws more power for the price I picked this card up at 215 dollars it's not 
an awful deal, but for strictly gaming I'd rather have the RX 6600; it just performs more consistently in older, still very popular games and APIs. The 6600 overclocks better, runs cooler, and is overall the more efficient design in terms of performance per watt and silicon area. Where this card shines, however, is for developers looking for Nvidia's software features without being locked into the Nvidia ecosystem. With inclusions such as proper ray tracing cores and XMX cores, the card has the feature set of Nvidia's cards, with performance that seems to be getting better with time. I wouldn't buy an A750 hoping it receives a massive performance uplift at some unspecified future date, but that does make it an interesting piece of tech to keep an eye on.

For the everyday consumer, I would just recommend either the RX 6600 or the 6600 XT over the Arc A750 if you're looking to spend around $300. You lose the features I just discussed, but the majority of gamers buying cheaper graphics cards probably won't be running ray tracing at demanding settings. The fact that the card is capable of it is nice, but it's not a deal-breaker if RT and/or tensor performance takes a hit at these lower price tiers.

It's also nice that the A750 has AV1 encode/decode. Once again, for cheaper GPUs this probably isn't a massive deal, but for content creators it's going to become more of a necessity moving forward. It may not be adopted in the immediate future, but once it is, it'll save you a ton of time and bandwidth, since the files are smaller and will require less time to upload. The card also features other content-creation acceleration tools such as Intel's Deep Link, which helps boost creative application performance; in supported programs such as DaVinci Resolve, it cut render times by roughly 17% in my case. It's a nice technology to include and makes this card more attractive for cost-conscious creators who use supported applications.

As for the final wrap-up, I think the entire Arc
lineup has the potential to offer great performance at a great price; the hardware is there, it's just a matter of taking advantage of it. Specifically, the A750 offers decent 1080p performance, and at the roughly $220 mark I found it for, that's great, but beyond that I'd recommend an RX 6600 or 6600 XT, as they'll just perform better for the price. For niche use cases the card excels, specifically in scientific workloads using lower-bit-depth operations that can fit into the 16 MB of L2 cache.

Would I recommend picking up an A750 in early 2023? Well, probably not, as there are better options on the market, but it doesn't blow up or emit ionizing radiation, so it won't kill you if you install it in your system. For developers, this card is definitely worth looking into, but if you're already used to CUDA, then I'd just stick with the Nvidia cards. I think that as time progresses this card will see more performance improvements and will eventually punch up to the RTX 3070, but in the here and now it's kind of a tough sell. Still, it's one hell of a first attempt.

So thank you for watching, and if you enjoyed, don't forget to leave a like and subscribe, and click the bell icon so you'll be notified about all our future uploads. Let me know what you think of the Arc A750. From my experience it's a great first try, but it has a lot of issues that need to be ironed out before a second generation comes out. Either way, stay tuned for some more A750 content. Thanks again for watching, and I'll catch you in the next video.
Info
Channel: Proceu Tech
Views: 14,095
Keywords: intel arc a750 review, intel arc a750 vs rtx 3060, intel arc a750 gpu, intel arc a770 gaming test, intel arc a770 review, intel arc a750 benchmark, intel arc a750, intel arc ray tracing, intel arc a770 16gb, intel arc alchemist, intel arc a770, intel arc performance, intel arc gpu, intel arc gpu benchmark, intel a750, intel graphics card, intel arc benchmark, arc, a750, arc a750, review, rtx 3070, rtx 3070 ti, 3070 ti, 3060 ti, arc a770, a750 vs 3060, 3060ti, intel arc
Id: ToxC99YP5tg
Length: 38min 10sec (2290 seconds)
Published: Sat Dec 31 2022