When implementing the Gauss-Jacobi algorithm in Python, I found that two different implementations take a significantly different number of iterations to converge.

The first implementation is what I originally came up with (a fuller sketch of it appears at the end of this post):

```python
import numpy as np

def GaussJacobi(A, b, x, x_solution, tol):
    ...
```

The second implementation is based on this article:

```python
def GaussJacobi(A, b, x, x_solution, tol):
    X_new = np.zeros(N, dtype=np.double)  # x(k+1)
    ...
```

The first implementation takes 37 iterations to converge with an error of 1e-8, while the second implementation takes only 7 iterations to converge. What makes the second implementation so much faster than the first?

I've also implemented two other methods, the Gauss-Seidel method and the SOR method. Both of these were implemented in a similar way to my original, slow Gauss-Jacobi method.

I ran randomized tests on 100 NxN diagonally dominant matrices for each N = 4 to 20 to get an average number of iterations until convergence.

[Results plot: average iterations until convergence vs. N for Gauss-Jacobi, Gauss-Jacobi Fast, Gauss-Seidel, and SOR (w = 1.5)]

The faster Gauss-Jacobi implementation is not only significantly faster than every other implementation, but its iteration count also does not seem to grow with the array size like the other methods' do.
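For reference, here is a minimal sketch of an element-wise Gauss-Jacobi iteration matching the signature above. Everything beyond the signature is an assumption, including the stopping test, which measures the error against the known solution `x_solution`:

```python
import numpy as np

def gauss_jacobi_slow(A, b, x, x_solution, tol):
    # Element-wise Jacobi: every entry of x(k+1) is computed from x(k) only.
    N = len(b)
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        x_new = np.zeros(N, dtype=np.double)  # x(k+1)
        for i in range(N):
            s = 0.0
            for j in range(N):
                if j != i:
                    s += A[i, j] * x[j]  # off-diagonal row sum using x(k)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
        iterations += 1
    return x, iterations
```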
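The vectorized version can be sketched the same way: splitting `A` into its diagonal `D` and the remainder `R` turns each sweep into a single matrix-vector product. The stopping test is the same assumed one:

```python
import numpy as np

def gauss_jacobi_fast(A, b, x, x_solution, tol):
    D = np.diag(A)          # 1-D array of the diagonal entries of A
    R = A - np.diagflat(D)  # A with its diagonal zeroed out
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        x = (b - R @ x) / D  # one whole-vector Jacobi step: x(k+1)
        iterations += 1
    return x, iterations
```

With an identical stopping test, these two sketches perform mathematically identical updates, which makes the 37-versus-7 gap all the more worth explaining.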
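The Gauss-Seidel and SOR methods can be sketched in the same element-wise style. The key difference from Jacobi is that `x` is updated in place, so entries with `j < i` already hold values from the current sweep; `w = 1.0` gives plain Gauss-Seidel, and `w = 1.5` is the SOR variant used in the tests:

```python
import numpy as np

def sor(A, b, x, x_solution, tol, w=1.5):
    N = len(b)
    x = np.array(x, dtype=np.double)  # work on a copy of the initial guess
    iterations = 0
    while np.max(np.abs(x - x_solution)) > tol:
        for i in range(N):
            s = 0.0
            for j in range(N):
                if j != i:
                    s += A[i, j] * x[j]  # entries with j < i are already updated
            # Blend the old value with the Gauss-Seidel update.
            x[i] = (1.0 - w) * x[i] + w * (b[i] - s) / A[i, i]
        iterations += 1
    return x, iterations
```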
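Finally, a sketch of the kind of randomized test harness described above, reusing `gauss_jacobi_slow` from the first sketch. How the diagonally dominant matrices and the exact solutions are generated here is an assumption:

```python
import numpy as np

def random_diagonally_dominant(N, rng):
    A = rng.uniform(-1.0, 1.0, size=(N, N))
    # Adding each row's absolute sum (plus 1) to the diagonal guarantees
    # |A[i, i]| exceeds the sum of that row's off-diagonal magnitudes.
    A += np.diagflat(np.abs(A).sum(axis=1) + 1.0)
    return A

rng = np.random.default_rng(0)
for N in range(4, 21):
    counts = []
    for _ in range(100):
        A = random_diagonally_dominant(N, rng)
        x_solution = rng.uniform(-1.0, 1.0, size=N)
        b = A @ x_solution                 # so x_solution solves Ax = b exactly
        x0 = np.zeros(N, dtype=np.double)
        _, its = gauss_jacobi_slow(A, b, x0, x_solution, 1e-8)
        counts.append(its)
    print(f"N={N:2d}  average iterations: {np.mean(counts):.1f}")
```

Strict diagonal dominance guarantees that the Jacobi iteration converges, so the loop always terminates.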