I'm trying to do an experiment in VB6. I haven't used VB6 in forever, but in C++, if you're using DirectX, OpenGL, or Vulkan (doesn't really matter which) and you have an NVIDIA-based graphics card, adding this one line of code makes your graphics program run a hell of a lot faster than usual, usually 3 to 5 times faster, because it utilizes the GeForce portion of your video card:
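Code:
    extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; }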
As a result, you get the GeForce Experience tab, and it shows the FPS, not to mention faster graphics.
I understand that VB6 is limited, but I've done much worse things that pushed it to its limits back in my day. And since it is also capable of doing DirectX, it can potentially use the GeForce as well.
The question is, how can I write extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; } in VB6?
Ya think a C++ DLL file containing this one line would do the trick? I should try it right now and give it a whirl.
The idea is to have a C/C++ compiled *object* file which can be *statically* linked into your final VB6 executable and thus force the linker to add the "magic entry" to the PE file's export table.
Tried putting the extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; } inside the enable_geforce_experience() method, but it didn't work because it can only be used at file scope. So I made it an empty dummy method, plugged it into some random VB6 project that had a fullscreen DirectX thing going via an API, called it, and nothing. Absolutely nothing.
I dunno if making a COM object would work. But I'm gonna have to try tomorrow. It's 4am here now rofl!
Try compiling Project1.vbp from this NVidia.zip sample project. It does get NvOptimusEnablement exported as the first ordinal in the export table (dumpbin.exe /exports):
Code:
    Export Table:
      Name:                Project1.exe
      Time Date Stamp:     0x5EB13DFF (5.5.2020 13:20:47)
      Version:             0.00
      Ordinal Base:        1
      Number of Functions: 1
      Number of Names:     1

      Ordinal  Entry Point  Name
            1  0x00002000   NvOptimusEnablement
How did this happen?
First I compiled nvidia.cpp from VS2016 x86 Native Tools Command Prompt with something like this
Code:
    c:\> cl -c nvidia.cpp
which produced nvidia.obj (not a .dll)
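For reference, nvidia.cpp is essentially just the export one-liner. The exact file contents here are my reconstruction; note that a windows.h include (or an equivalent typedef) is needed for the DWORD type to compile standalone:

Code:
    // nvidia.cpp - reconstructed; <windows.h> supplies the DWORD typedef
    #include <windows.h>
    extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; }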
Then I opened Project1.vbp and manually added these two lines
Code:
[VBCompiler]
LinkSwitches=nvidia.obj
Finally, I compiled the VB6 project to a folder where the file nvidia.obj is present.
If I had tried to compile to c:\temp, for instance, the linker wouldn't have been able to find nvidia.obj in the target folder (unless I had put the full pathname in LinkSwitches beforehand).
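Something like this, with a made-up path:

Code:
    [VBCompiler]
    LinkSwitches=c:\samples\nvidia\nvidia.obj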
There also seems to be some confusion about what it does exactly.
It will only enforce a preference (of the NVidia-driver, though restricted to code-sections which make use of the GPU)...
And it does that only when you have an additional GPU in your system...
(as e.g. in Notebooks which have an integrated Intel- or AMD- default-GPU + sometimes an extra NVidia-chip).
Without this export, there's also an easy way to startup an Executable with GPU-preference.
E.g. on my Win10 - via RightClick-ContextMenu:
> Run with graphicsProcessor > Integrated graphics (default)
> Run with graphicsProcessor > High Performance NVIDIA processor
This will achieve the same effect without any export-defs (when you choose NVIDIA).
With the export-def in place (as shown above), the Context-Menu-choice is just "overridden"...
(the executable will then always start up with the NVidia-preference, if the system has an NVidia-GPU installed).
There is no need for an *.obj module... instead it is sufficient to export a little empty Sub directly from a *.bas module (plus the accompanying Linker-Switch in the *.vbp):
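A minimal sketch of that approach (reconstructed from the description in this thread; the exact /EXPORT switch spelling is my assumption):

Code:
    ' In a standard *.bas module - the Sub stays deliberately empty:
    ' its compiled body is just a single RETN opcode (&HC3), so the
    ' DWORD the driver reads at the exported symbol is non-zero.
    Public Sub NvOptimusEnablement()
    End Sub

...plus the accompanying section in the *.vbp:

Code:
    [VBCompiler]
    LinkSwitches=/EXPORT:NvOptimusEnablement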
Yes, this might work if the check in the client code is NvOptimusEnablement != 0, because the first couple of instructions in the VB6 function prolog are definitely *not* all zeroes.
Yep, that's what I was counting on - and it worked as expected ...
(I did test this with an RC5-CairoSurface in "D2D"-UploadMode on Win10)
And sure, defining a Zero behind the exported Symbol is not possible this way (using a procedure-def).
But that (exporting a Zero) is probably not what this export is meant to accomplish (for those who use it in C/C++).
I also experimented a bit with exporting (from a *.bas-module):
- a normal Public Const NvOptimusEnablement As Long = 1
- as well as a normal Variable as: Public NvOptimusEnablement As Long
(along with the same *.vbp Linker-section)
... but this didn't work (an "unrecognized symbol" was thrown at me)
I've then tried to make it easier for the Linker by explicitly specifying a *.def file, but "no cigar" either...
But the little Sub worked "well enough" (even without the *.def).
Global Variable NvOptimusEnablement (new in Driver Release 302)
Starting with the Release 302 drivers, application developers can direct the Optimus driver at runtime to use the High Performance Graphics to render any application -- even those applications for which there is no existing application profile. They can do this by exporting a global variable named NvOptimusEnablement. The Optimus driver looks for the existence and value of the export. Only the LSB of the DWORD matters at this time. A value of 0x00000001 indicates that rendering should be performed using High Performance Graphics. A value of 0x00000000 indicates that this method should be ignored.
A bit hanging on the edge... although we've seen worse :-))
They definitely want to be able to bump it to 0x00000002 or implement a bitmask with feature requests.
Jacob could answer the question whether NVidia explicitly checks for the lowest bit in the export,
by compiling and running his existing C++ app with this setting:
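Code:
    extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000002; }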
Just tested this myself, and what NVidia currently does is not what they've specified in the documentation...
They are not checking explicitly whether the LSB is at 1 -
they just treat the exported Value like a bool-expression (everything <> 0 is true).
Even if they check the LSB in the future, the "empty-sub" export should keep working,
because its "prolog" == "epilog" == &HC3 == just the x86 RETN-instruction.
Schmidt, _declspec(dllexport) does more than methods. It also does variables. NvOptimusEnablement is a variable, so the Sub Export trick is not gonna work. I even tested it on a sample DX app I made in VB6 years ago and got nothing. No GeForce tab or FPS. Even if that were the trick, it would be a function that returns a 4-byte value (DWORD) of &H1. I also tried extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000002; } just for funzies and it does nothing. Only the values 0x00000001 and 0x00000000 are documented (see the NVIDIA quote above).
wqweto, I tried your obj file and also got nothing. It was an outstanding try though, and it seemed promising. Something is missing. Maybe something Visual Studio 2019's C++ automatically includes that VB6 never did. Otherwise, why would enabling such a value trigger the GeForce?
Why is this not automatic when using DirectX?
This NvOptimusEnablement only works when DirectX is used, right? Maybe also D2D?
The trick created the typelibs for DirectX9 and Direct2D; is it not possible to add NvOptimusEnablement to those typelibs?
Yes baka, it only works when a graphics library is used, such as DirectX, OpenGL, or Vulkan. This one line of code taps into the GeForce part of your NVIDIA GeForce video card, unleashing superb speed.
Anyway, I have all the GeForce tests here: both their methods and my C++ app. Unfortunately I could not include the DirectXTK part of my C++ app because it was 158 megabytes, so you would have to add that manually via Project > Manage NuGet Packages... and add directxtk_desktop_2015.
There are a bunch of symbols within the "C" library, NvOptimusEnablement being one of them. I'm assuming Visual Studio automatically includes this in any C++ project, and that could potentially have something to do with it.
Good news: I'm getting closer to my goal. I found a working solution for C#, which could be the solution for VB.Net. And with that said, I can potentially convert it to VB6.
So the nvapi.dll API needs to be called with the procedure name "fake", which will (deliberately) result in an error.
"NvAPI cannot be dynamically linked to applications. You must create a static link to the library and then call NvAPI_Initialize(), which loads nvapi.dll dynamically."
Originally Posted by Jacob Roman
Schmidt, _declspec(dllexport) does more than methods.
Of course... that was the whole point of the "LS-Bit"-discussion between wqweto and me.
The convenient Linker-Switch in a VB6 *.vbp only works (without hacking around)
for procedures (Subs and Functions in a *.bas).
Originally Posted by Jacob Roman
NvOptimusEnablement is a variable, so the Sub Export trick is not gonna work.
I've already mentioned that it worked for me - so please don't guess,
check it out instead via a concrete (compiled) example.
Originally Posted by Jacob Roman
I even tested it on a sample DX app I made in VB6 years ago and got nothing.
Yep, and that's entirely explainable - because the so christened "Optimus-preference-GPU-switchery" -
only works from DX9 onwards (it is not supported on DX8, which you've used in your VB6-Demos).
Originally Posted by Jacob Roman
Even if that were to be the trick, it would be a function that returns a 4 byte value (DWORD) of &H1.
Wrong. Please inform yourself about "Exports of named symbols" in C or Assembly-forums.
What I've discussed with wqweto is the "function's assembly-code behind a symbol" (starting with a Prolog) -
and not its return-result (which is a completely different thing).
The NVidia-switchery doesn't know that the exported Symbol(Name) points to a Function(Prolog).
It just reads the 4 bytes of the DWORD behind the pointer a given SymbolName resolves to.
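To illustrate that last point, here is a tiny C sketch of the kind of lookup the driver presumably performs (my illustration only, not NVidia's actual code): resolve the exported name, then read the DWORD it points to, without caring whether those bytes are data or a function prolog.

Code:
    // Illustration only: read the DWORD behind an exported symbol.
    // Must run inside a process whose EXE exports NvOptimusEnablement.
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HMODULE exe = GetModuleHandle(NULL);  /* handle of our own EXE */
        DWORD *p = (DWORD *)GetProcAddress(exe, "NvOptimusEnablement");
        if (p != NULL)
            printf("bytes at NvOptimusEnablement: 0x%08lX\n", *p);
        /* For the empty VB6 Sub, the first byte is the RETN opcode
           (0xC3), so the DWORD read here is non-zero. */
        return 0;
    }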
Originally Posted by Jacob Roman
I also tried extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000002; }
just for funzies and it does nothing.
You either didn't test this, or you messed up your executable-location, or something...
My guess is, that you didn't compile and test it at all (with Value=2),
because your "VB6-tests were already failing"...
I've downloaded your Zip, and compiled your VC++Project (in Release-Mode, 64Bit),
using { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000002; } -
and the resulting executable switched perfectly from the Intel-default-GPU to the NVidia-GPU.
My system runs latest VC-2019, a fully upgraded Win10 - with the latest WHQL-certified NVidia-drivers.
As for VB6-Demos, which will show that the "symbol-export-switch" indeed works...:
- you'll have to use either a typelib for DX9 or higher
- or just use the platform-independent OpenGL
(which BTW offers an unchanged and stable API for those who want to do GPU-accelerated 3D-stuff in VB6;
it will always use the "latest and greatest" GPU-Driver that's installed on a given system, without any chance for DX-version-lockouts).
FWIW, below comes an OpenGL-Demo (not written by me, only slightly adapted)...
It does contain the NVidia-Preference-enforcement-switch (via the little "empty Sub"),
and also the accompanying Linker-Switches in the *.vbp.
And here's a screenshot:
(OpenGL0 is a binary I've compiled without the Linker-Switches, OpenGL1 the one which enforced the switch to NVidia).
One can see, that OpenGL0 is using the default-GPU (GPU0) and OpenGL1 the "other GPU on my Notebook" (NVidia).
Originally Posted by Jacob Roman
This one line of code taps into the GeForce part of your NVIDIA GeForce video card, unleashing superb speed.
No, it doesn't.
The exported Symbol you're talking about - just does one single thing:
- when more than one GPU is installed on a given system...
- it will ignore the "usual default-GPU" (mostly a less powerhungry integrated GPU from Intel)
- and instead forces the executable to "switch to the NVidia-Chip" (and Drivers) at startup
And it does that switchery on "all Values behind the exported Symbol-Pointer",
which are different from Zero (an explicit value of 1 is not the only Value which works).
You know, I believe you are right about all of that, and yes, I admit I was wrong about things. But bear with me: I'm experimenting here and only going by observations. Just because things work on other machines and not mine, well, there is no way for me to know unless I test on other machines as well. One thing's for certain: it could be that it only works for DirectX9 apps or higher, which is why I am not seeing it in my old VB6 DirectX apps. But there is one thing you were wrong on. I did compile it in C++ using 0x00000002, and it had no effect on my end (I even did it again to be sure). And yes, I even ran the OpenGL app you sent me, and I still do not see it.
I also witnessed a number of observations:
1) Using extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; } in an app, compiling and running it, then commenting it out and compiling and running it again: GeForce Experience stayed active in the app regardless of the line being commented out.
2) Using extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000000; } in an app, then compiling and running it, prevented my app from running and gave me an error that it could not create the device.
3) Using extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000002; } in an app, then compiling and running it, gave me the same result as 0x00000001 and simply enabled GeForce Experience.
4) Using extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; } in an app, compiling and running it, and then running a completely different DX app from Visual Studio that does not have this line of code: that other app also ran using GeForce Experience.
5) In the past, when playing around with extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; }, I used to see this in both fullscreen and windowed mode. But lately I've only been able to see it in fullscreen mode.
So with all that said, I believe it is an internal NVIDIA setting that I enabled. I also believe there was another NVIDIA setting I may have changed in the past regarding never seeing it in windowed mode, which probably explains why I can't see it in the OpenGL app you sent me.