Eliminates a potential bug vector related to inheritance. Besides, we
should generally declare the destructor as virtual in pure virtual
interfaces to begin with.
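A minimal sketch of the pattern (the interface name here is hypothetical, not one from this change):

    // A pure virtual interface should declare a virtual destructor so that
    // deleting a derived object through a base pointer is well defined.
    class ControllerBase {
    public:
        virtual ~ControllerBase() = default;
        virtual void OnUpdate() = 0;
    };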
Fixes implicit sign conversions occurring in uses of std::reduce, and
moves the call into its own utility function to reduce verbosity a
little.
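A sketch of the idea with a hypothetical helper (not the actual utility function added here): give std::reduce an initial value of the correct unsigned type so no implicit sign conversion occurs.

    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Explicit std::size_t init value keeps the accumulation unsigned.
    [[nodiscard]] std::size_t TotalSize(const std::vector<std::size_t>& sizes) {
        return std::reduce(sizes.begin(), sizes.end(), std::size_t{0});
    }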
When this was made mandatory, the enablement of these features was removed, but it is still needed.
Fixes: 757fd1e917 ("vulkan_device: Require VK_EXT_robustness2")
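For illustration, a simplified sketch of what re-enabling the features looks like (struct contents and surrounding setup are abbreviated, not the exact device-creation code): even when the extension is required, its features still have to be enabled through the pNext chain.

    #include <vulkan/vulkan.h>

    // Chain the robustness2 features into device creation.
    VkPhysicalDeviceRobustness2FeaturesEXT robustness2{
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ROBUSTNESS_2_FEATURES_EXT,
        .pNext = nullptr,
        .robustBufferAccess2 = VK_TRUE,
        .robustImageAccess2 = VK_TRUE,
        .nullDescriptor = VK_TRUE,
    };
    VkDeviceCreateInfo device_ci{};
    device_ci.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    device_ci.pNext = &robustness2;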
Otherwise the fence might get submitted out-of-order into the queue, which makes testing them pointless.
Overhead should be tiny as the mutex is just moved from the queue to the writing code.
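A rough sketch of the intent (names and element type are hypothetical): hold the lock in the writing code across both the submission and the bookkeeping push, so fences cannot enter the queue out of order.

    #include <mutex>
    #include <queue>

    std::mutex fence_mutex;
    std::queue<int> pending_fences; // placeholder for the real fence type

    void QueueFence(int fence) {
        // Lock spans the submit and the push, preserving submission order.
        std::scoped_lock lock{fence_mutex};
        // SubmitToHardwareQueue(fence); // hypothetical submission call
        pending_fences.push(fence);
    }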
This was implicitly done by `is_powered_on = false`, but the explicit method allows us to block until the GPU is actually gone.
This should fix a race condition when removing the other subsystems while the GPU is still active.
It blocks until there is something to consume in the queue.
Use it for the GPU emulation instead of the spin loop.
This only happens while booting the emulator, but in BOTW it lasts for about 1 second.
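A minimal sketch of the assumed shape of such a blocking queue (not the exact implementation): the consumer sleeps on a condition variable instead of spinning.

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class BlockingQueue {
    public:
        void Push(T value) {
            {
                std::scoped_lock lock{mutex};
                queue.push(std::move(value));
            }
            cv.notify_one();
        }

        // Blocks until there is something to consume.
        T PopWait() {
            std::unique_lock lock{mutex};
            cv.wait(lock, [this] { return !queue.empty(); });
            T value = std::move(queue.front());
            queue.pop();
            return value;
        }

    private:
        std::mutex mutex;
        std::condition_variable cv;
        std::queue<T> queue;
    };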
Avoid passing a null pointer to memcpy, as reported by the Undefined
Behaviour Sanitizer. Replaces the std::memcpy calls in SpliceVectors
with std::copy calls, opting to replace all of the memcpys rather than
only the offending ones.
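A hedged sketch of the pattern (the SpliceVectors signature below is an assumption, not the real one): std::copy with iterators is well defined even when the source range is empty, unlike memcpy called with a null pointer.

    #include <algorithm>
    #include <iterator>
    #include <vector>

    template <typename T>
    void SpliceVectors(std::vector<T>& dst, const std::vector<T>& src) {
        dst.reserve(dst.size() + src.size());
        // Safe for empty src; no raw pointers involved.
        std::copy(src.begin(), src.end(), std::back_inserter(dst));
    }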
Co-authored-by: LC <mathew1800@gmail.com>
Currently, the Windows versions of the Intel OpenGL driver and the AMD
proprietary OpenGL driver do not properly support asynchronous shader
compilation (or in fact degrade when it is enabled). This blocks
specifically those drivers from using the feature. This affects
AMDGPU-PRO on Linux, and AMD's and Intel's OpenGL drivers on Windows.
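Illustrative only (the exact vendor strings and where the check lives are assumptions): the decision can be keyed off the reported GL vendor.

    #include <string_view>

    // Returns false for the drivers known to misbehave with async shaders.
    bool AllowAsyncShaders(std::string_view gl_vendor) {
        const bool is_amd = gl_vendor == "ATI Technologies Inc.";
        const bool is_intel = gl_vendor == "Intel";
        return !(is_amd || is_intel);
    }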
ASTC texture decoding is currently handled by a CPU decoder for GPUs without native ASTC decoding support (most desktop GPUs). This causes noticeable performance degradation in titles that use the format extensively.
This commit adds support to accelerate ASTC decoding using a compute shader on OpenGL for GPUs without native support.
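As a very rough sketch of the dispatch side only (binding points, workgroup size, and parameter names are assumptions, not the actual pipeline): the encoded ASTC blocks are exposed to a compute shader, which writes decoded texels into an uncompressed image.

    void DispatchAstcDecode(GLuint program, GLuint encoded_ssbo,
                            GLuint output_texture, GLuint width, GLuint height) {
        glUseProgram(program);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, encoded_ssbo);    // encoded ASTC blocks
        glBindImageTexture(0, output_texture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA8);
        glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);        // assumes 8x8 blocks
        glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
    }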
In order to force the BGRA8 conversion on Nvidia with OpenGL, we need to forbid texture copies and views with other formats.
This commit also adds a boolean for this, as it only needs to be done for the OpenGL API; Vulkan must remain unchanged.
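A sketch of the idea with hypothetical names: when the OpenGL-only flag is set, copies and views are only allowed between identical formats.

    enum class PixelFormat { B8G8R8A8_UNORM, A8B8G8R8_UNORM /* ... */ };

    bool IsViewCompatible(PixelFormat lhs, PixelFormat rhs, bool forbid_other_formats) {
        if (forbid_other_formats) {
            return lhs == rhs; // forces the BGRA8 path, no reinterpretation
        }
        return true; // placeholder for the usual compatibility rules
    }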
OpenGL does not natively support BGR internal formats, which causes many BGR textures to render incorrectly, with the red and blue channels swapped.
This commit addresses this by swizzling the blue and red channels on texture copies when a BGR format is encountered.
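As a conceptual illustration only (not the code path used for the actual copies): swapping the red and blue channels of an 8-bit-per-channel texel.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    void SwizzleBgraToRgba(std::uint8_t* pixels, std::size_t pixel_count) {
        for (std::size_t i = 0; i < pixel_count; ++i) {
            std::swap(pixels[i * 4 + 0], pixels[i * 4 + 2]); // B <-> R
        }
    }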
- Uses a fixed 64 MB cache instead of an ever-growing map.
- Slightly faster by using atomics instead of a single mutex for access (see the sketch below).
- Thanks to Rodrigo for the idea.
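A rough sketch of the scheme (entry layout heavily simplified): a fixed-size, direct-mapped table whose slots are read and written atomically, so lookups do not need to lock a mutex around the whole container.

    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class FixedCache {
    public:
        explicit FixedCache(std::size_t bytes)
            : slots(bytes / sizeof(std::uint64_t)) {}

        void Store(std::uint64_t hash, std::uint64_t value) {
            slots[hash % slots.size()].store(value, std::memory_order_relaxed);
        }

        std::uint64_t Load(std::uint64_t hash) const {
            return slots[hash % slots.size()].load(std::memory_order_relaxed);
        }

    private:
        std::vector<std::atomic<std::uint64_t>> slots;
    };

    // e.g. FixedCache cache{64 * 1024 * 1024}; // fixed 64 MB, never grows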
Some games benefit from skipping caches (Pokémon Sword) and others
don't (Animal Crossing: New Horizons). Add a heuristic to decide this
at runtime.
The cache hit ratio has to be ~98% or better to not skip the cache.
There is a 16-frame buffer.
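A sketch of the heuristic (names and exact bookkeeping are assumptions): track hits and misses over a 16-frame window and skip the cache whenever the hit ratio falls below ~98%.

    #include <cstddef>

    class CacheHeuristic {
    public:
        void RecordQuery(bool hit) {
            ++total;
            hits += hit ? 1 : 0;
        }

        void EndFrame() {
            if (++frames < WINDOW_FRAMES) {
                return;
            }
            // Skip the cache when fewer than 98% of queries hit.
            skip_cache = total != 0 && hits * 100 < total * 98;
            frames = 0;
            hits = 0;
            total = 0;
        }

        bool ShouldSkipCache() const {
            return skip_cache;
        }

    private:
        static constexpr std::size_t WINDOW_FRAMES = 16;
        std::size_t frames = 0;
        std::size_t hits = 0;
        std::size_t total = 0;
        bool skip_cache = false;
    };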
This commit removes early placeholders for an implementation of async nvdec. With recent changes to the source code, the placeholders are no longer accurate, and can cause a nullptr dereference due to the nature of the cdma_pusher lifetime.
src/video_core/shader_notify.cpp: In member function 'void VideoCore::ShaderNotify::MarkShaderComplete()':
src/video_core/shader_notify.cpp:33:10: error: 'unique_lock' is not a member of 'std'
33 | std::unique_lock lock{mutex};
| ^~~~~~~~~~~
src/video_core/shader_notify.cpp:6:1: note: 'std::unique_lock' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
5 | #include "video_core/shader_notify.h"
+++ |+#include <mutex>
6 |
src/video_core/shader_notify.cpp: In member function 'void VideoCore::ShaderNotify::MarkSharderBuilding()':
src/video_core/shader_notify.cpp:38:10: error: 'unique_lock' is not a member of 'std'
38 | std::unique_lock lock{mutex};
| ^~~~~~~~~~~
src/video_core/shader_notify.cpp:38:10: note: 'std::unique_lock' is defined in header '<mutex>'; did you forget to '#include <mutex>'?
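The fix is what the compiler suggests: add the missing standard header to shader_notify.cpp (include placement shown here follows the usual standard-headers-first convention).

    // src/video_core/shader_notify.cpp
    #include <mutex>

    #include "video_core/shader_notify.h"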