-
Jul 2nd, 2020, 03:41 PM
#1
Thread Starter
Lively Member
GDI+ Performance Issue
I have been trying to build a simple 3D engine in VB.NET and it's visually pretty good.
(Except I still need to add a Z-buffer or painter's algorithm to properly show concave shapes.)
My problem is that when it comes to a model with a large number of triangles (such as the monkey) the FPS drops significantly, to the point where it slightly freezes as it's rotating.
I figured my rotate function creating a new mesh every time might be too much to ask for:
vb.net Code:
Public Function Rotate_X(ByVal Angle As Single) As Mesh
    ' Degrees to radians, computed once per call
    Dim Sin_T As Double = Math.Sin(Angle * Math.PI / 180)
    Dim Cos_T As Double = Math.Cos(Angle * Math.PI / 180)
    Dim MSH As New Mesh
    For Each Triangle As TriFace In arr_Faces
        ' Clone so the original mesh is left untouched
        Dim TF As TriFace = Triangle.Clone
        For Each Vertex As PVector In TF.Vertices
            ' Standard rotation about the X axis
            Dim Y As Double = Vertex.Y
            Dim Z As Double = Vertex.Z
            Vertex.Y = CSng(Y * Cos_T - Z * Sin_T)
            Vertex.Z = CSng(Z * Cos_T + Y * Sin_T)
        Next
        MSH.AddFace(TF)
    Next
    Return MSH
End Function
So I instead made a function to fetch the vertices in a mesh and rotate them directly, but that was even slower.
When it comes to models with a smaller number of faces, everything runs smoothly though.
I'm using the FillPath method in the Paint event; should I switch to a bitmap and write my own raster function using SetPixel instead?
vb.net Code:
Dim new_Points() As Point = {
    New Point(PTriangle.Vertex_A.X, PTriangle.Vertex_A.Y),
    New Point(PTriangle.Vertex_B.X, PTriangle.Vertex_B.Y),
    New Point(PTriangle.Vertex_C.X, PTriangle.Vertex_C.Y)}
Using TrianglePath As New Drawing2D.GraphicsPath(Drawing2D.FillMode.Alternate)
    TrianglePath.AddLines(new_Points)
    TrianglePath.CloseFigure()
    ' Dispose the brush and path; leaking a GDI brush per triangle adds up fast
    Using MyBrush As New Drawing.SolidBrush(TriColor)
        GFX.FillPath(MyBrush, TrianglePath)
    End Using
End Using
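Would reusing a single brush and calling FillPolygon directly (skipping the GraphicsPath) help? Something like this sketch, using the same PTriangle/TriColor/GFX names as above:
vb.net Code:
' Sketch: one reusable brush instead of a new SolidBrush per triangle,
' and FillPolygon instead of building a GraphicsPath for each face.
Private ReadOnly SharedBrush As New SolidBrush(Color.Black)

Private Sub DrawTriangle(ByVal GFX As Graphics, ByVal PTriangle As TriFace, ByVal TriColor As Color)
    SharedBrush.Color = TriColor   ' recolor the cached brush
    Dim Pts() As PointF = {
        New PointF(PTriangle.Vertex_A.X, PTriangle.Vertex_A.Y),
        New PointF(PTriangle.Vertex_B.X, PTriangle.Vertex_B.Y),
        New PointF(PTriangle.Vertex_C.X, PTriangle.Vertex_C.Y)}
    GFX.FillPolygon(SharedBrush, Pts)   ' FillPolygon closes the figure itself
End Sub
(I'd also set Me.DoubleBuffered = True on the form so the Paint event doesn't flicker.)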
Or do I just need to use a language with access to the graphics card at this point ?
I'm kind of lost ... thnx in advance for suggestions.
-
Jul 2nd, 2020, 04:12 PM
#2
Re: GDI+ Performance Issue
There is a limit to how much graphics you can do without the graphics card. Eventually, you will hit that limit if your graphics get complicated enough. I encountered that limit on one program. The drawing was not going to be fast enough, so I tried a variety of things. The first step was to time everything so that I only focused on the parts that were actually a problem, rather than just the ones I thought could be problems. You seem to have already done that step, though.
Once I had the key points identified, I optimized them as much as I could, but that will never yield more than a few percent improvement. In my case, there was an obvious improvement because I could cache all the images that were being drawn.
The big issue I ran into was that I had up to a few hundred items being drawn, and the user could zoom in and out, which could result in those few hundred items all being composed and drawn. Since that wasn't fast enough, I cached all the images at each zoom level such that I only had to redraw the ones that had changed at any time. This was fast, but only worked as long as there weren't more than a handful that had to be redrawn at any point in time. The drawback was that, if I ever had to redraw everything, it was FAR slower. At first, that didn't matter to me, because I rarely had to redraw everything. Once I found that there were too many times when I did have to, the whole approach fell apart.
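In sketch form, the caching looked roughly like this (the names here are illustrative, not my actual code):
vb.net Code:
' One rendered bitmap per (item, zoom level); only re-render entries
' whose underlying item actually changed.
Private Cache As New Dictionary(Of Integer, Bitmap)

Private Function GetItemImage(ByVal id As Integer, ByVal zoom As Integer, ByVal dirty As Boolean) As Bitmap
    Dim key As Integer = id * 100 + zoom   ' crude composite key for the sketch
    If dirty OrElse Not Cache.ContainsKey(key) Then
        Dim bmp As New Bitmap(64, 64)
        Using g As Graphics = Graphics.FromImage(bmp)
            ' ... compose the item at this zoom level into bmp ...
        End Using
        If Cache.ContainsKey(key) Then Cache(key).Dispose()
        Cache(key) = bmp
    End If
    Return Cache(key)
End Function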
At that point, I switched to XNA, which was a framework that made use of the graphics card. Naturally, MS abandoned XNA a few months later, but it lives on in the MonoGame framework.
My point is that no matter what you do to try to improve the performance with GDI, considering the complexity of the graphics, you will almost certainly exceed what the CPU is capable of, and will need to make use of the graphics card. That doesn't mean abandoning the language, though, as MonoGame will do it, and is .NET. The other option would be WPF, which also uses the graphics card, but that is likely to be a bigger step than MonoGame.
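For what it's worth, the skeleton of a MonoGame program is small, and it stays VB.NET (sketch only; drawing goes through GraphicsDevice instead of GDI+, so the triangles are rasterized on the graphics card):
vb.net Code:
Imports Microsoft.Xna.Framework
Imports Microsoft.Xna.Framework.Graphics

Public Class Engine3D
    Inherits Game

    Private Graphics As GraphicsDeviceManager

    Public Sub New()
        Graphics = New GraphicsDeviceManager(Me)
    End Sub

    Protected Overrides Sub Update(ByVal gameTime As GameTime)
        ' advance the rotation angle here; runs once per frame
        MyBase.Update(gameTime)
    End Sub

    Protected Overrides Sub Draw(ByVal gameTime As GameTime)
        GraphicsDevice.Clear(Color.CornflowerBlue)
        ' submit the mesh's triangles here, e.g. via GraphicsDevice.DrawUserPrimitives
        MyBase.Draw(gameTime)
    End Sub
End Class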
My usual boring signature: Nothing
-
Jul 6th, 2020, 07:36 PM
#3
Re: GDI+ Performance Issue
This kind of thing should be done on a GPU. The method you're using performs those transformations on the CPU. You need to use a library like DirectX or OpenGL, which can perform operations on triangles using the graphics adapter's GPU. Graphics operations tend to be highly parallelizable, and GPUs excel at parallel operations.
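For example, with an XNA/MonoGame-style API the per-vertex loop from the first post collapses into a single world matrix that the GPU applies to every vertex in parallel (sketch only; effect is a BasicEffect and verts a VertexPositionColor array):
vb.net Code:
' The whole Rotate_X loop becomes one matrix applied on the GPU.
effect.World = Matrix.CreateRotationX(MathHelper.ToRadians(angle))
For Each pass As EffectPass In effect.CurrentTechnique.Passes
    pass.Apply()
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, verts, 0, verts.Length \ 3)
Next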