• imaginary_num6er@alien.top · 1 year ago

    “I asked them is there a technical reason why 12th and 13th gen parts aren’t supported, and if not, will they be included in the future? Their response to that question was as follows: Intel has no plans to support prior generations of products with application optimization. That’s a really garbage response, to be perfectly blunt about it.”

    Yeah, let’s have people rush to upgrade to 14th gen when upgrading already had questionable value. This APO feature will die in obscurity once Intel realizes 14th gen is not being adopted; unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.

    • Put_It_All_On_Blck@alien.top · 1 year ago

      unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.

      XeSS is in close to 100 games now, and more people use XeSS than even own Arc GPUs, since it has better quality than FSR and also works on AMD and Nvidia GPUs. Intel has also already marketed Meteor Lake together with XeSS, and they expect around 100 million people to buy MTL in 2024.

      If anything, XeSS has been the most successful part of Intel’s consumer GPU push.

      • AgeOk2348@alien.top · 1 year ago

        as it has better quality than FSR

        *Depending on the game. Spider-Man and Hogwarts Legacy, for instance, have much worse ghosting with XeSS than FSR, so it’s kinda useless for those.

    • Jesburger@alien.top · 1 year ago

      Not being adopted? Dell, HP, and Lenovo will slowly stop selling 13th gen and move on to 14th gen, like they do every year. Businesses will buy the computers with the biggest gen number, as they always do. Gamers on Reddit aren’t the huge market you may think they are for these companies.

    • soggybiscuit93@alien.top · 1 year ago

      Not if the game library keeps increasing and APO is supported on all future Intel CPUs.
      It really seems to be a software optimization to better leverage E-cores in gaming to improve performance. I don’t see how that feature is going to die, as Intel seems to be committed to hybrid for the foreseeable future.

  • XenonJFt@alien.top · 1 year ago

    The insanity is that after 3 generations, the Windows kernel still can’t prioritize P/E-core usage between games and background desktop tasks; parking the E-cores still gives better results. AMD’s cache situation was kinda acceptable in the 7950X3D vs. 7800X3D debate because games can’t utilize that many cores anyway.

    And then there are all the BIOS and mobo hoops you have to go through to be compatible, for 2 titles.

    Intel has mostly abandoned ship on gaming competitiveness. The clock speeds and high TDP at least have their use in workloads.

  • battler624@alien.top · 1 year ago

    I have no idea how it works, but it’s probably moving everything that isn’t the game itself off the P-cores and keeping the game restricted to the P-cores.
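
That guess — pinning the game to P-cores and shooing everything else onto E-cores — boils down to building CPU affinity bitmasks. A minimal sketch, assuming a hypothetical i9-14900K-style layout (8 P-cores with Hyper-Threading as logical CPUs 0–15, 16 E-cores as logical CPUs 16–31); both the layout and the policy are illustrative assumptions, not Intel’s actual APO implementation:

```python
# Hypothetical Raptor Lake logical-CPU layout (assumption for illustration).
P_CORE_LOGICAL = range(0, 16)   # P-core hardware threads
E_CORE_LOGICAL = range(16, 32)  # E-core hardware threads

def mask(cpus):
    """Build an affinity bitmask with one bit per logical CPU."""
    m = 0
    for c in cpus:
        m |= 1 << c
    return m

# Pin the game to P-cores only; everything else goes to E-cores.
game_mask = mask(P_CORE_LOGICAL)
background_mask = mask(E_CORE_LOGICAL)

print(hex(game_mask))        # 0xffff
print(hex(background_mask))  # 0xffff0000
```

On Windows such masks would be applied with SetProcessAffinityMask/SetThreadAffinityMask; on Linux with sched_setaffinity.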

    • Knjaz136@alien.top · 1 year ago

      Question is, why doesn’t Windows have that option: instead of just core affinity, restricting cores to a manually defined task and forbidding everything else from them.
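
For what it’s worth, the per-process half of this does exist (Task Manager’s “Set affinity”, or SetProcessAffinityMask via the Win32 API); what’s missing is the inverse — reserving cores so everything *else* is kept off them, which needs system-wide mechanisms like isolcpus or cgroup cpusets on Linux. A small sketch of the existing half, using the Linux analogue (Windows would go through ctypes/Win32 instead):

```python
import os

# Restrict the current process to logical CPU 0 only. Note this is opt-in
# per process: it does NOT forbid other processes from using CPU 0, which
# is exactly the gap the comment above is pointing at.
os.sched_setaffinity(0, {0})    # pid 0 = the calling process
print(os.sched_getaffinity(0))  # {0}
```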

      • F9-0021@alien.top · 1 year ago

        Because Microsoft, the biggest software company in history, cannot make good software.

  • GenZia@alien.top · 1 year ago

    From what I’m seeing, even with APO enabled, only 4 E-Cores are actually doing anything. The rest of the cluster is parked, doing absolutely nothing.

    Actually, that’s false. They’re still consuming power, however minuscule it may be!

    And that’s one of the many reasons I don’t understand why Intel is stuffing so many E-Cores into their CPUs. Their practicality in real-world scenarios is mostly academic from the perspective of most users.

    A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

    Frankly, I just can’t help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, there are only a handful of ‘consumer-grade’ programs which are parallel enough to actually make use of a CPU with 32 threads.

    Anyhow, fingers crossed for Intel’s mythical ‘Royal Core.’ A tile-based CPU architecture sans hyper-threading sounds pretty interesting… at least on paper.

    • VankenziiIV@alien.top · 1 year ago

      You think E-cores are only for synthetics? What if I showed you that 6P+6E or 6P+8E can defeat 8P in real-world applications?

      • GenZia@alien.top · 1 year ago

        Well, applications are definitely getting optimized for 8C/16T as of late, so it won’t be all that surprising.

        Hyper-threaded threads (hyper-threads?) can’t match an actual core by design, after all.

        However, I’m merely questioning the addition of 8+ E-Cores in Intel’s high-end SKUs. I believe I explicitly mentioned that I can see the potential of integrating 4 to 8 E-Cores into a CPU.

        • VankenziiIV@alien.top · 1 year ago

          What if I showed you that Intel’s 12th gen 6P+6E was able to defeat AMD’s 8P in real-world applications 2 years ago?

          • GenZia@alien.top · 1 year ago

            A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

        • carpcrucible@alien.top · 1 year ago

          It’s perfectly reasonable for high-end SKUs.

          You either have single-threaded workloads or games that might use 6-8 threads at most. Or you have “embarrassingly parallel” workloads like rendering or all sorts of scientific computing that will use as many cores as you have.

          If you literally only game on your PC then I guess just disable the e-cores.

    • liesancredit@alien.top · 1 year ago

      The 10900K was the last well-designed Intel CPU. Just straight up 10 powerful cores. That’s how a CPU should be.

      • dudemanguy301@alien.top · 1 year ago

        Ah yes, who could forget the absolute TRIUMPH of the same tired architecture recycled for the 4th time in a row, on the same tired process recycled for the 5th time in a row.

    • soggybiscuit93@alien.top · 1 year ago

      More E cores aren’t for “mundane background tasks”. They’re to maximize MT performance in a given die space.

      It’s why the 8+16 14900K competes with the 7950X in MT applications, but would clearly lose if it were the alternative 12+0.

      Most people, myself included, would struggle to really utilize 32 threads. But the 7950X and 14900K exist for those that can or may be able to.
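
The die-space argument is easy to sanity-check with a back-of-envelope model. All numbers below are rough assumptions for illustration (ballpark figures often cited for Raptor Lake: an E-core at roughly a quarter of a P-core’s area and roughly half its multithreaded throughput), not Intel’s data:

```python
# Assumed, illustrative ratios -- not measured values.
P_AREA, E_AREA = 1.0, 0.25  # die area per core, P-core = 1 unit
P_MT, E_MT = 1.0, 0.55      # MT throughput per core, P-core (with HT) = 1 unit

def config(p, e):
    """Total area and MT throughput of a p-P-core + e-E-core layout."""
    return {"area": p * P_AREA + e * E_AREA,
            "mt_perf": p * P_MT + e * E_MT}

hybrid = config(8, 16)    # 14900K-style 8P+16E
big_only = config(12, 0)  # hypothetical all-P-core chip in the same area

print(hybrid["area"], round(hybrid["mt_perf"], 2))  # 12.0 16.8
print(big_only["area"], big_only["mt_perf"])        # 12.0 12.0
```

Under these assumptions, 8P+16E and 12P+0E occupy the same silicon, but the hybrid layout delivers roughly 40% more multithreaded throughput — which is the commenter’s point.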

      • GenZia@alien.top · 1 year ago

        They’re to maximize MT performance in a given die space.

        And I never said otherwise.

        I explicitly mentioned that more E-Cores can boost scores in multi-threaded synthetic benchmarks and, in turn, any parallel workload.

  • Due_Teaching_6974@alien.top · 1 year ago

    Intel’s E-cores doing what they’re supposed to, in 2 games, 2 years after their debut, and only on their newest CPU lineup. Peak Intel engineering right here.

    • AgeOk2348@alien.top · 1 year ago

      And they refuse to let people buy CPUs without them; can’t let AMD win every benchmark that the vast majority of gamers will never use.

    • siazdghw@alien.top · 1 year ago

      It’s the exact opposite of what you’re saying.

      Intel’s E-cores + Thread Director work perfectly fine 98% of the time, but there are edge cases where the Windows scheduler can’t get it right, even with the hints from Thread Director, and that’s where APO comes in: to manually force the correct scheduling.

      Also, let’s not pretend that AMD isn’t suffering scheduling issues themselves. The 7950X3D and 7900X3D are shunned because they have WORSE scheduling in games, as they rely on the Windows scheduler to just try and figure things out itself, and that doesn’t usually work with 2 CCDs where one has a higher frequency and the other more cache.

      • shopchin@alien.top · 1 year ago

        Importantly, do you think the fix will come for 12th/13th gen Intel? You seem to know what you’re talking about.

    • splerdu@alien.top · 1 year ago

      I mean unless you’re Apple and have full top to bottom control of your hardware and software stack it takes some time for software to catch up with the hardware.

      Took a while for games to use MMX, SSE, AVX. Stuff that uses AVX512 can probably be counted on one hand.

      Good ray traced games are becoming mainstream just now, two whole generations after GeForce 20 series.

      I do begrudge Intel for holding this back from 12th and 13th gen users though.

      • p3ngwin@alien.top · 1 year ago

        Took a while for games to use MMX

        Even Intel’s 1st iteration of MMX was a kludge, as it used the floating-point unit, so you could either use FP or MMX, but not both simultaneously o.O

        It took a while for that to be separated so that both could be used together.

        Intel also added 57 new instructions specifically designed to manipulate and process video, audio, and graphical data more efficiently.

        These instructions are oriented to the highly parallel and often repetitive sequences often found in multimedia operations.

        Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image.

        The main drawbacks to MMX were that it only worked on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary.

        These drawbacks were corrected in the additions to MMX from Intel and AMD.

        https://www.informit.com/articles/article.aspx?p=130978&seqNum=7

    • F9-0021@alien.top · 1 year ago

      More like peak Microsoft engineering, since this is something that was always supposed to be done by the operating system. Microsoft is so awful Intel had to do it themselves.

    • msolace@alien.top · 1 year ago

      Which just shows the scheduler is wrong — which people who cared to put in the effort already fixed manually with Process Lasso. The only missing piece is random kernel threads jumping onto P-cores. AMD’s scheduler isn’t perfect either, and both companies are going big.LITTLE, so there’s plenty of room to keep improving.

      • CascadiaKaz@alien.top · 1 year ago

        Correction: it shows that Intel Thread Director is wrong, and that the scheduler shouldn’t trust it.

        • SkillYourself@alien.top · 1 year ago

          Thread Director doesn’t do any directing; it’s a set of new registers the OS scheduler is supposed to read for feedback on how well a thread is running on a core. If APO can do it right, it means the scheduler is wrong.

          15.6 HARDWARE FEEDBACK INTERFACE AND INTEL® THREAD DIRECTOR

          Intel processors that enumerate CPUID.06H.0H:EAX.HW_FEEDBACK[bit 19] as 1 support Hardware Feedback Interface (HFI). Hardware provides guidance to the Operating System (OS) scheduler to perform optimal workload scheduling through a hardware feedback interface structure in memory.
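
Decoding the enumeration the SDM excerpt describes is a one-liner once you have the raw CPUID values. A sketch — the sample EAX values are made up, and Python can’t execute the CPUID instruction itself, so real code would call into a native helper:

```python
HW_FEEDBACK_BIT = 19  # CPUID.06H:EAX bit 19, per the SDM excerpt above

def supports_hfi(eax_leaf_06h: int) -> bool:
    """Decode the Hardware Feedback Interface support bit from leaf 06H EAX."""
    return bool((eax_leaf_06h >> HW_FEEDBACK_BIT) & 1)

# Made-up example register values for illustration only.
print(supports_hfi(1 << 19))        # True
print(supports_hfi((1 << 19) - 1))  # False (every bit below 19 set, bit 19 clear)
```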

          • CascadiaKaz@alien.top · 1 year ago

            *facepalm* Are you daft?

            How the scheduler gets information from ITD doesn’t change what ITD does.

  • advester@alien.top · 1 year ago

    The most interesting thing is that APO dropped the power from 190W to 160W while increasing the performance.

  • No-Roll-3759@alien.top · 1 year ago

    12600K owner. I’m so frustrated. big.LITTLE has never delivered on the behavior they promised, and now I’m being locked out of the fix. Forcing me over to Windows 11 was not a fix, it was just aggravation.

    I early-adopted the new arch because I really wanted to use an Optane accelerator. Intel quietly software-locked 12th gen out of Optane support, so when I built my system I spent an hour poring through the BIOS trying to figure out how to get it running and wondering why Intel’s web instructions weren’t working for me.

    Overall it’s been a pretty bad experience, and one Intel curated for me. Based on my 12600K experience I’ll be very reluctant to adopt Intel proprietary technologies in the future.

  • zakats@alien.top · 1 year ago

    Ah, right, that’s why I wouldn’t have bought Intel. My fault for forgetting.

  • benefit420@alien.top · 1 year ago

    I can’t get this to work on an ASUS Z790-E board. I tried the ASUS DTT drivers, and someone suggested trying the ASRock DTT drivers. The ASRock ones installed just fine, but the APO app still says “failed to connect.”

  • Gawdsauce@alien.top · 1 year ago

    Glad I went with AMD. I knew Intel would fuck that shit up one way or another; they don’t care about the consumer space, they care about the server market and nothing else.

  • nohpex@alien.top · 1 year ago

    Has anyone else seen these videos where people change the frequency (I believe*) of how often Windows issues an interrupt request to check the power state of the system, to reduce overall system latency?

    For whatever reason, Windows checks this every 15 ms, but people are changing it to the maximum setting of 5,000 ms, which reduces latency for the CPU considerably… apparently fiddling with this setting is particularly bad for AMD’s X3D chips.

    What are the pros and cons to this? Has any reputable journalist looked into this?
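
The setting being described sounds like the hidden “processor performance time check interval” power option (that identification is my assumption; the commenters don’t name it). Taking their numbers at face value, the reduction in periodic work is simple to quantify:

```python
# How many periodic power/performance checks per second each interval implies.
def wakeups_per_second(interval_ms: float) -> float:
    return 1000.0 / interval_ms

print(round(wakeups_per_second(15), 1))  # 66.7 checks/sec at the 15 ms default
print(wakeups_per_second(5000))          # 0.2 checks/sec at the 5,000 ms maximum
```

Fewer periodic checks means fewer interruptions of game threads, but also slower reaction to load changes — a plausible reason the tweak is reported to behave badly on parts that depend on threads being migrated promptly.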

    • veotrade@alien.top · 1 year ago

      It works. Set to 5000ms, which is the max value.

      It’s garbage that end users need to do any tweaking at all.

      A good number of tweaks are unproven and famously just bog down the system even more.

      As a casual user myself, I wouldn’t even know if changing one setting, let alone dozens of settings, makes a difference. I’m not qualified to test, so on some of these “fixes” I just blindly follow the advice of the tutorial.

      But disabling E-cores and changing the interval from 15 ms to 5,000 ms have helped me.

      I’ve also subscribed to the LatencyMon optimizations, like setting interrupt affinity masks for my GPU, Ethernet, and USB host controller.

    • ConsciousWallaby3@alien.top · 1 year ago

      I also went for a 12400 over more expensive options at the time because not only was it good value, but I also wasn’t interested in the experience of being an early adopter for mixing different core types on Wintel.

  • Knjaz136@alien.top · 1 year ago

    Isn’t this basically a thread scheduler fix that makes E cores do what they are actually supposed to do?

    And they are reserving this fix for 14th gen only for, seemingly, no reason? With a good chance that they had this fix for a while, but management decided to reserve it for 14th gen?

    This is what I’m reading from their reply to HUB.

    • reddanit@alien.top · 1 year ago

      Well, at the very surface level it does look like it’s just a scheduler fix. On the other hand, it does seem to need some firmware support, and presumably there is some reason why it only supports 2 games. So maybe it is something more complicated?