Yeah, you can just close the app and continue from where you left off, except that it will continue from the latest save point, so it's best to do that at a round iteration number (31000, 32000, 33000, etc.). It depends on what frequency you set for saves. You can also go to the training/models directory and check at what intervals the saves are being made. I always set the save interval during training to 1000 iterations, with a backup every 2500 iterations. That means that, in the worst case, quitting training only loses me 30 minutes of compute (1000 iterations ~ 30 minutes of computation). In practice, though, I just watch the iteration counter and do something else until it reaches the next 1000 mark, saves, and updates the validation score, and then I quit.
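If it helps, here's roughly what I mean by "it will continue from the latest save point", as a small Python sketch that just picks the newest checkpoint in training/models. The .pt extension and the assumption that the iteration number is in the filename are guesses on my part, so adjust to whatever your toolkit actually writes out:

```python
# Sketch: find the newest checkpoint in training/models.
# The "*.pt" pattern and number-in-filename convention are assumptions.
import re
from pathlib import Path

def latest_checkpoint(model_dir="training/models"):
    best_iter, best_path = -1, None
    for path in Path(model_dir).glob("*.pt"):
        match = re.search(r"(\d+)", path.stem)   # pull the iteration number out of the filename
        if match and int(match.group(1)) > best_iter:
            best_iter, best_path = int(match.group(1)), path
    return best_iter, best_path

iteration, path = latest_checkpoint()
print(f"Training would resume from iteration {iteration}: {path}")
```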
Batch size 58, I'm jealous.

I think you can actually push your batch size to something like 80. The thing is: you can just set it as high as you can, and if it's too high, it'll just bug out at the start with an error, so you lower it a bit. I then set it a little lower still, just to make sure it won't crash overnight while it's running. I have a 12 GB GPU running batch size 40 (11 GB reported in use), so with 23 GB reported available you should be able to get roughly double that.
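The "set it high and back off if it crashes" thing is basically a try/except around one training step. This is just a generic PyTorch-style sketch with a placeholder model, not your actual training code, but the pattern is the same:

```python
# Sketch of the "set it high, back off on OOM" approach.
# The model and tensor shapes are placeholders; the point is the try/except pattern.
import torch

def fits_in_memory(batch_size, device="cuda"):
    model = torch.nn.Linear(80, 80).to(device)        # stand-in for the real net
    try:
        x = torch.randn(batch_size, 80, device=device)
        loss = model(x).sum()
        loss.backward()                                # the backward pass is what usually tips it over
        return True
    except RuntimeError as e:                          # PyTorch reports OOM as a RuntimeError
        if "out of memory" in str(e):
            torch.cuda.empty_cache()
            return False
        raise

batch_size = 96
while batch_size > 1 and not fits_in_memory(batch_size):
    batch_size -= 8
print(f"Largest batch size that fit: {batch_size} (knock it down a notch for overnight safety)")
```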
Doing that will increase your speed by a factor of 80/58, roughly another 38%.
Looking good so far: the loss looks good and the attention score is in the 0.60-0.70 range. Did you stop training and do some synthesis yet? At 1000 epochs you can probably already get a feel for the voice (but you still want more epochs).
A small tip: I'd also advise you to copy and paste the reports it gives you, with all the validation scores, into a notepad file somewhere. The reason is that what you want to do is an "early stop". Meaning: if you train this thing to 10000 epochs, there is a risk that at some point the neural net becomes overtrained. You can tell that has happened when the validation score stops improving for a long time. So what you are looking for is the epoch number where the validation score reaches its maximum-plateau value, before it starts wobbling around that plateau. To put it differently: keep checking the validation scores to make sure they are still improving. Sometimes they drop a little and rise a little, but in the long run they should still improve.
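If you do keep the scores in a notepad, the "find where it plateaued" step is easy to automate too. Rough sketch below; the (epoch, score) numbers are made up and the patience value of 1000 epochs is just a guess:

```python
# Sketch: given (epoch, validation score) pairs from the notepad,
# find the epoch where the score stopped improving (the plateau).
def best_epoch(history, patience=1000):
    best_ep, best_score = None, float("-inf")
    for epoch, score in history:
        if score > best_score:
            best_ep, best_score = epoch, score
        elif epoch - best_ep >= patience:
            break                      # no new best for `patience` epochs -> plateaued
    return best_ep, best_score

history = [(1000, 0.52), (2000, 0.61), (3000, 0.66), (4000, 0.68),
           (5000, 0.69), (6000, 0.68), (7000, 0.69), (8000, 0.68)]
print(best_epoch(history))   # -> (5000, 0.69): load the model saved around there
```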
Anyway, I'm very curious how the voice sounds. And I'd most definitely train this to 10000 epochs (especially at the speed you can train) and then check the validation scores in your notepad to see whether they started plateauing somewhere before the 10000-epoch mark. If so, just load the model from the epoch where the plateau started and use that.