I have been playing with different image-upscaling neural networks: letsenhance.io and Nvidia's GameWorks Super Resolution.

I finally went with Nvidia's solution (it took some time to get into the beta!): unlimited requests and better results.

Minor drawbacks: the max output size is 8192, so for 8x that means a 1024 input, etc., and only 2 files can be processed at the same time.


The process was to pack the Doom textures into several 1024x1024 PNGs (7 images), get the 8x upscaled versions (using 2 different techniques), then blend those results together since each has its own qualities and issues, downsize to 4096 with bicubic supersampling to smooth out some of the noise, and finally downsize to 2048x2048 with nearest-neighbour supersampling to keep the sharp feel of the original Doom textures.
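Roughly, the blend-and-downscale step looks like this in Python with Pillow (the filenames and the 50/50 blend ratio are just placeholders; Pillow 9.1+ assumed):

```python
# Minimal sketch of the blend/downscale pipeline described above.
# Filenames and the 50/50 blend ratio are placeholders.
from PIL import Image

def merge_and_downscale(upscale_a_path, upscale_b_path, out_path):
    a = Image.open(upscale_a_path).convert("RGB")  # 8192x8192, upscaler technique A
    b = Image.open(upscale_b_path).convert("RGB")  # 8192x8192, upscaler technique B

    # Blend the two upscales since each has its own qualities and issues.
    blended = Image.blend(a, b, alpha=0.5)

    # Downsize to 4096 with bicubic resampling to smooth out some of the noise...
    mid = blended.resize((4096, 4096), Image.Resampling.BICUBIC)

    # ...then to 2048 with nearest neighbour to keep the sharp original-Doom feel.
    final = mid.resize((2048, 2048), Image.Resampling.NEAREST)
    final.save(out_path)

merge_and_downscale("sheet1_8x_a.png", "sheet1_8x_b.png", "sheet1_2048.png")
```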

The 4x and 8x versions have too many odd AI "artistic style" artifacts to be usable.

Unfortunately this comes with unwanted pixels here and there, so heavy de-noising work was needed. The contrast also changes: bright details get brighter and dark ones get darker. This had to be cleaned up as well, by removing them and letting the original texture colours show through.

The cleanup isn't perfect, but a first alpha release stage has been reached and I can test it in-game...


But I'm stuck: I can't make this work, and I need some help to launch GZDoom with the -file parameter pointing at a folder containing all the hires PNGs. Then I can fine-tune things from Photoshop to GZDoom easily.
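For reference, here's roughly what I'm attempting as a Python sketch: zip the PNGs under a hires/ folder into a pk3 and pass it to GZDoom with -file. Paths and filenames are hypothetical, and the hires/ naming convention is worth double-checking on the ZDoom wiki.

```python
# Sketch: pack a folder of hires PNGs into a .pk3 and launch GZDoom with it.
# Paths/filenames are hypothetical; gzdoom is assumed to be on the PATH.
import os
import subprocess
import zipfile

SRC_DIR = "hires"                 # folder with FLOOR4_8.png, BIGDOOR2.png, ...
PK3_PATH = "doom_hires_alpha.pk3"

with zipfile.ZipFile(PK3_PATH, "w", zipfile.ZIP_DEFLATED) as pk3:
    for name in os.listdir(SRC_DIR):
        if name.lower().endswith(".png"):
            # Store each PNG under the hires/ namespace inside the archive.
            pk3.write(os.path.join(SRC_DIR, name), arcname=f"hires/{name}")

# GZDoom also accepts a plain folder after -file, which is handy for quick
# Photoshop -> GZDoom iteration without re-zipping every time.
subprocess.run(["gzdoom", "-iwad", "doom2.wad", "-file", PK3_PATH])
```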

If someone more skilled than me at GZDoom modding could take a look, I'd really appreciate it.

My files (PNGs, non-working PK3s) are shared in this Google Drive folder.


Here are some previews of the results (note: the transparency of the vine was done manually; sadly the upscaler can't handle black-and-white shapes, but it's not that hard to do by hand).

Dragonfly: I re-uploaded the transparent textures as transparent PNGs without the cyan background.
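For anyone who wants to redo that keying step, here's a rough Pillow sketch that knocks out a cyan background; it assumes the key colour stays close to pure cyan (0, 255, 255), which may not hold after upscaling:

```python
# Sketch: turn a cyan-keyed texture into a transparent PNG.
# Assumes the background is close to pure cyan (0, 255, 255).
from PIL import Image

def cyan_to_alpha(src_path, dst_path, tolerance=30):
    img = Image.open(src_path).convert("RGBA")
    pixels = img.load()
    for y in range(img.height):
        for x in range(img.width):
            r, g, b, a = pixels[x, y]
            # Treat anything close enough to pure cyan as background.
            if r <= tolerance and g >= 255 - tolerance and b >= 255 - tolerance:
                pixels[x, y] = (r, g, b, 0)
    img.save(dst_path)

cyan_to_alpha("VINE_cyan.png", "VINE.png")  # hypothetical filenames
```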

DooM_RO: I uploaded some 8K sources here. I'll take a look at the Smooth Doom files to see if I can upscale them at the same time as the original ones. The Doom 2 Minor Sprite Fixing Project also seems worth taking into consideration. What bothers me is that it would then no longer be a hires override pack but a full mod (actually, since I can't make the hires folder work, it's already no longer a hires pack lol).


Linguica: I'm still fighting with my files to make this work correctly, with no success :(.

- I split flats and patches into a flats.txt and a patches.txt. Only the flats' scale works; the patches' scales are still ignored.

- I tried to understand how HD-Textures.pk3 works, because it does work (as opposed to mine :D), even though it uses crazy filenames different from the ones in doom2.wad. The .txt files inside it just show up as "readme"; the patch/scale information is hidden somewhere else.
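Rather than hand-writing hundreds of entries, something like the sketch below could generate TEXTURES-style definitions with XScale/YScale for each replacement. I'm not certain the syntax and scale semantics are exactly right, so verify against the ZDoom wiki's TEXTURES page first (names and sizes are placeholders):

```python
# Sketch: generate TEXTURES-style definitions so the 2x replacements are
# drawn at the original in-game size. Syntax/semantics are from memory of
# the ZDoom wiki TEXTURES page; verify there. Names/sizes are placeholders.
SCALE = 2  # replacements are 2x the original resolution

# original lump name -> pixel size of the hi-res replacement image
replacements = {
    "FLOOR4_8": (128, 128),   # 64x64 flat at 2x
    "BIGDOOR2": (256, 192),   # 128x96 patch at 2x
}

with open("textures.hires", "w") as f:
    for name, (w, h) in replacements.items():
        f.write(f'Texture "{name}", {w}, {h}\n')
        f.write("{\n")
        f.write(f"    XScale {SCALE}\n")
        f.write(f"    YScale {SCALE}\n")
        f.write(f'    Patch "{name}", 0, 0\n')
        f.write("}\n\n")
```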


Sound familiar? The phrase p(doom) might be new, but the practice of ridiculing doomers goes back at least half a century. In the case of climate change, those who did the ridiculing were on the wrong side of history.

And you should care about p(catastrophe | widespread AI adoption) whatever you think about p(doom). The risks of current AI (bias, defamation, cybercrime, wholesale disinformation, etc) are already starting to be well documented and may themselves quickly escalate, and could lead to geopolitical instability even over the next few years.

In conclusion: this morning when I could think more clearly 'cause I wasn't standing on a stage, I thought the overall probability of doom was 19% .. but I don't think you should listen to that very much 'cause I might change it tomorrow or something.

For what it's worth, here's a slightly longer overview on my own current preferred approach to estimating "p(doom)", "p(catastrophe)", or other extremely uncertain unprecedented events. I haven't yet quite worked out how to do this all properly though - as Gary mentioned, I'm still working on this as part of my PhD research and as part of the MTAIR project (see -model-based-approach-to-ai-existential-risk). The broad strokes are more or less standard probabilistic risk assessment (PRA), but some of the details are my own take or are debated.

Step 1: Determine decision thresholds. To restate the part Gary quoted from our email conversation: We only really care about "p(doom)" or the like as it relates to specific decisions. In particular, I think the reason most people in policy discussions care about something like p(doom) is because for many people higher default p(doom) means they're willing to make larger tradeoffs to reduce that risk. For example, if your p(doom) is very low then you might not want to restrict AI progress in any way just because of some remote possibility of catastrophe (although you might want to regulate AI for other reasons!). But if your p(doom) is higher then you start being willing to make harder and harder sacrifices to avoid really grave outcomes. And if your default p(doom) is extremely high then, yes, maybe you even start considering bombing data centers.

So the first step is to decide where the cutoff points are, at least roughly - what are the thresholds for p(doom) such that our decisions will change if it's above or below those points? For example, if our decisions would be the same (i.e., the tradeoffs we'd be willing to make wouldn't change) for any p(doom) between 0.1 and 0.9, then we don't need any more fine-grained resolution on p(doom) if we've decided it's at least within that range.
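To make that concrete, here's a toy sketch of thresholds mapping a p(doom) estimate to a stance; the cutoffs (0.1 and 0.9, reusing the example above) and the labels are made up purely for illustration:

```python
# Toy sketch of Step 1: map a p(doom) estimate to a (hypothetical) stance
# via decision thresholds. Cutoffs and labels are illustrative only.
def policy_stance(p_doom: float) -> str:
    if p_doom < 0.1:
        return "no AI-specific restrictions on these grounds"
    elif p_doom < 0.9:
        return "accept moderate tradeoffs to reduce the risk"
    else:
        return "accept very large tradeoffs to reduce the risk"

for p in (0.05, 0.3, 0.95):
    print(p, "->", policy_stance(p))
```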

Step 2: Determine plausible ranges for p(doom), or whatever probability you're trying to forecast. Use available data, models, expert judgment elicitations, etc. to get an initial range for the quantity of interest, in this case p(doom). This can be a very rough estimate at first. There are differing opinions on the best ways to do this, but my own preference is to use a combination of the following:

- I currently lean towards trying to specify plausible probability ranges in the form of second-order probabilities when possible (e.g., what's your estimated probability distribution for p(doom), rather than just a point estimate). Other people think it's fine to just use a point estimate or maybe a confidence interval, and still others advocate for using various types of imprecise probabilities. It's still unclear to me what all the pros and cons of different approaches are here.
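As a small sketch of what a second-order probability looks like in practice, here's an arbitrary Beta distribution over p(doom), checked against the example thresholds from Step 1 (scipy assumed; all numbers are made up):

```python
# Sketch of Step 2: represent uncertainty about p(doom) itself as a
# second-order distribution (an arbitrary Beta here) and ask how much of
# that distribution falls inside the decision band from Step 1.
from scipy.stats import beta

second_order = beta(a=2, b=8)   # hypothetical: mean 0.2, fairly spread out

low, high = 0.1, 0.9            # example decision thresholds from Step 1
inside = second_order.cdf(high) - second_order.cdf(low)
print(f"P(0.1 <= p(doom) <= 0.9) = {inside:.2f}")
print(f"Point estimate (mean)    = {second_order.mean():.2f}")
```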

Step 3: Decide whether it's worth doing further analysis. As above, if in Step 1 we've decided that our relevant decision thresholds are p(doom)=0.1 and p(doom)=0.9, and if Step 2 tells us that all plausible estimates for p(doom) are between those numbers, then we're done and no further analysis is required because further analysis wouldn't change our decisions in any way. Assuming it's not that simple though, we need to decide whether it's worth our time, effort, and money to do a deeper analysis of the issue. This is where Value of Information (VoI) analysis techniques can be useful.
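To give a feel for what a VoI calculation looks like, here's a crude sketch of expected value of perfect information for a two-action decision; the losses and the prior are entirely made up:

```python
# Sketch of Step 3: a crude expected-value-of-perfect-information (EVPI)
# check for a two-action decision. Losses and prior are made up.
import numpy as np

rng = np.random.default_rng(0)
p_doom_samples = rng.beta(2, 8, size=50_000)   # second-order prior from Step 2

LOSS = {  # hypothetical losses, in arbitrary units
    ("restrict", "doom"): 10, ("restrict", "no_doom"): 10,
    ("laissez_faire", "doom"): 1000, ("laissez_faire", "no_doom"): 0,
}
ACTIONS = ("restrict", "laissez_faire")

def expected_loss(action, p):
    return p * LOSS[(action, "doom")] + (1 - p) * LOSS[(action, "no_doom")]

# Best we can do acting on the prior mean (loss is linear in p).
prior_loss = min(expected_loss(a, p_doom_samples.mean()) for a in ACTIONS)

# Best we could do if we learned the true p(doom) before choosing.
informed_loss = np.mean([min(expected_loss(a, p) for a in ACTIONS)
                         for p in p_doom_samples])

print("EVPI (upper bound on what further analysis is worth):",
      prior_loss - informed_loss)
```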

Step 4 (assuming further analysis is warranted): Try to factor the problem. Can we identify the key sub-questions that influence the top-level question of p(doom)? Can we get estimates for those sub-questions in a way that allows us to get better resolution on the key top-level question? This is more or less what Joe Carlsmith was trying to do in his report, where he factored the problem into 6 sub-questions and tried to give estimates for those.
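As a toy version of that kind of factorization, where the top-level probability is the product of conditional sub-question estimates (the sub-questions and numbers below are placeholders, not Carlsmith's actual ones):

```python
# Sketch of Step 4: a Carlsmith-style factorization where p(doom) is the
# product of conditional sub-question estimates. Placeholders only.
factors = {
    "advanced agentic AI is built this century": 0.65,
    "it is deployed at scale | built": 0.8,
    "serious alignment failures occur | deployed": 0.4,
    "failures scale to permanent catastrophe | failures": 0.3,
}

p_top = 1.0
for question, p in factors.items():
    p_top *= p
    print(f"{question}: {p}")

print(f"Implied top-level probability: {p_top:.3f}")
```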

One potential advantage of factorization is that it allows us to ask the sub-questions to different subject matter experts. For example, if we divide up the overall question of "what's your p(doom)?" into some factors that relate to machine learning and other factors that relate to economics, then we can go ask the ML experts about the ML questions and leave the economics questions for economists. (Or we can ask them both but maybe give more weight to the ML experts on the ML questions and more weight to the economists on the economics questions.) I haven't seen this done so much in practice though.
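A minimal sketch of that weighting idea, using a simple linear opinion pool per sub-question (experts, weights, and estimates are all made up):

```python
# Sketch: aggregate each sub-question across experts with domain-dependent
# weights (a linear opinion pool). All names and numbers are hypothetical.
estimates = {
    # sub-question: {expert: probability estimate}
    "ML capability sub-question": {"ml_expert": 0.7, "economist": 0.5},
    "economic diffusion sub-question": {"ml_expert": 0.6, "economist": 0.3},
}

weights = {
    "ML capability sub-question": {"ml_expert": 0.8, "economist": 0.2},
    "economic diffusion sub-question": {"ml_expert": 0.2, "economist": 0.8},
}

for question, per_expert in estimates.items():
    w = weights[question]
    pooled = sum(per_expert[e] * w[e] for e in per_expert)
    print(f"{question}: pooled estimate = {pooled:.2f}")
```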

One idea I've been focusing on a lot for my research is to try to zoom in on "cruxes" between experts as a way of usefully factoring overall questions like p(doom). However, it turns out it's often very hard to figure out where experts actually disagree! One thing I really like is when experts say things like, "well if I agreed with you on A then I'd also agree with you on B," because then A is clearly a crux for that expert relative to question B. I actually really liked Gary's recent Coleman Hughes podcast episode with Scott Aaronson and Eliezer Yudkowsky, because I thought that they all did a great job on exactly this.

The first phase of our MTAIR project (the 147 page report Gary linked to) tried to do an exhaustive factorization of p(doom) at least on a qualitative level. It was *very* complicated and it wasn't even complete by the time we decided to at least publish what we had!

I was at a workshop in the Bay Area recently, and the “icebreaker” was, “say your name and give your p(doom)”. Usually when people are talking about p(doom), they are thinking specifically about AI – could AI kill us all? What are the odds of that? A lot of people in Silicon Valley are pretty worried that p(doom) might be something other than zero; some even seem to put the number at ten percent or higher. (Eliezer Yudkowsky seems to put it at near 100%.) One survey, which was perhaps not entirely representative, put it a lot higher than that. And that was before GPT-4, which some people (not me) think somehow increases the odds of doom.

Meanwhile, p(doom) per se is not the only thing we should be worried about. Perhaps extinction is vanishingly unlikely, but what I will call p(catastrophe) – the chance of an incident that kills (say) one percent or more of the population – seems a lot higher. For literal extinction, you would have to kill every human in every nook and cranny on earth; that’s hard, even if you do it on purpose. Catastrophe is a lot easier to imagine. I can think of a LOT of scenarios where that could come to pass (e.g., bad actors using AI to short the stock market could try, successfully, to shut down the power grid and the internet in the US, the US could wrongly blame Russia, and the conflict could ensue and escalate into a full-on physical war).
