
Messages - contact_2


All of the texture work (apart from the HUD art) seen in this trailer was made by me in Designer 5.x. We are a tiny indie team of three, and are very excited to finally be launching a closed playtest of our game, Vox Machinae. If you would like to see the game for yourselves, along with the chance to help us shape the way it's being developed, feel free to sign up for the playtest.

Hope you guys like it, thanks for your interest and time.

Hi All,

I've always wanted to know if it's possible to generate output files in TIFF format with compression enabled. The TIFF format supports both layers and compression, but I have never found a way to enable this inside SD. Does anyone know if it's possible?
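In the meantime, one workaround is to re-save SD's uncompressed TIFF output with lossless compression as a post-process. Here's a minimal sketch using Pillow; the filenames are placeholders, and a solid gray image stands in for a real SD export:

```python
from PIL import Image

# Stand-in for an SD TIFF export (in practice, open the real output file).
Image.new("RGB", (256, 256), (128, 128, 128)).save("sd_output.tif")

# Re-save with lossless LZW compression; Pillow also accepts "tiff_deflate".
img = Image.open("sd_output.tif")
img.save("sd_output_lzw.tif", compression="tiff_lzw")
```

Note that this only compresses a flat image; it won't preserve TIFF layers that SD never wrote in the first place.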

The texture sheet node on Share looks great, but it's really only made for laying out source textures, not for generating them over time. The bit that I'd really like help with is the generation of unique frames for the sheet itself.

Basically I am looking for a way to output the results of $time to maps for laying out within SD, and then combine those in a row for the resulting texture sheet. I guess this really doesn't involve Unity so much, except that it's the target use for the assets, hehe.

The only thing I've found with this sort of option is the frame animation export in Player, which will spit out sequential images of the frames that I would then have to re-import and hook up to offsets to get them lined up in sequence. A bit of a bummer for a workaround, as it seems this should already be possible from within SD itself.

I can't believe I missed this response for so many months since the original inquiry!

1000 times YES, please post it on Share if you can; I would greatly appreciate it. Basically, yeah, I want to take a global normal and infuse it with additional tangent-space derived details.

It was my understanding that additional maps and inputs are required to do the correct math for the lighting to work out. I'm curious: how were you able to accomplish this using just the two input maps (global normals + tangent normals)?
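For what it's worth, common detail-normal approximations really do need only the two normals themselves. One such scheme is the "whiteout" blend; a minimal per-vector sketch (pure math, not SD's actual node internals):

```python
import math

def whiteout_blend(n1, n2):
    """Blend a base (global) normal with a detail (tangent) normal.

    Both inputs are unit vectors in (-1, 1) space. Whiteout blending
    sums the XY components, multiplies Z, then renormalizes.
    """
    x = n1[0] + n2[0]
    y = n1[1] + n2[1]
    z = n1[2] * n2[2]
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A flat detail normal leaves the base normal unchanged:
base = (0.3, 0.1, math.sqrt(1 - 0.3**2 - 0.1**2))
flat = (0.0, 0.0, 1.0)
```

Whether this matches what the responder built is an assumption on my part, but it shows two-input blending is plausible.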

Please let me know when you have put it on Share!

Where to begin here....

I think the key issue with this kind of response is that one should not be required to use another piece of software for what seems like a fundamental function of these kinds of programs to begin with. Sure, I *could* spit out some textures with bordering negative space on the sides to Photoshop (which I unfortunately have to do at the moment), but why is adding extra steps, and potentially cost for those who don't have Photoshop, a good idea?

In addition, there are many valid reasons one would want to work with non-POT textures. Take the process of authoring cubemaps for games: the format most widely accepted by game engines these days does not conform to POT.

Cool, yeah, I just feel like there is so much missed potential here for VFX artists to use this package for creating procedural effects for games and film. The power to quickly create realistic natural effects is certainly there, and with a little more exposure to the user it could be pretty amazing.

Currently, we have been able to get some pretty good results for our game's (Vox Machinae) particles using SD, but animated textures are missing from the mix. I have made it so our engine (Unity) picks randomly from a set of 4-16 different texture sheet options to randomize our particles, but it could be so much better with fully animated textures. There is really no technical limitation on the game end of things, only the issue of generating source material.

Here's an example of some of the weapon particles in action:

Hope that helps visualize the kind of effects we plan to animate.

Thanks for helping get some more eyeballs on the issue Wes, you're a superstar as usual ;-). I look forward to some possible things to try and get going here.

Thanks for the response, these sure sound like features that would be great if they existed ;-)

As such, I have heard from various people on these forums that timeline support for scrubbing frames would be valuable to have down the road. I am aware that Substance Player has this, though I feel iteration within SD is faster when things are within easy grasp.

So I guess there is not much way of getting this kind of behavior, short of blindly authoring frames and trying to preview them manually... a pity, because it was so close to being a powerful approach for VFX artists.

In my case, the idea is NOT to do any substance work in Unity, as we are creating a VR game and need all the CPU/GPU we can get. I basically just want a texture sheet filled with frames of animation, and I was hoping SD could be used to generate them, as it supports animation and looping frame textures.


So my big question is: how do I create a sprite sheet based on a graph that will fill each offset frame on a grid and step the animation each time?

Anyone who can answer this for me gets the biggest gold star EVER.

Yeah, basically what you are describing is the intended purpose. I want an end result where I have a map with multiple frames of animation laid out on a grid, and to set Unity's particle system to cycle through them.

I mused over the idea of manually setting up instances of graphs, but honestly such a manual process would be a pain for anything more than a dozen or so frames (I have a 4k x 8k texture sheet which horizontally spans around 48 frames, so manual is not the way here).

Basically I think there would be benefit to having a graph spit out offset frames based on the animation, and then authoring that within SD as a resultant texture map.

Hi All,

Just thought I'd start an ongoing progress thread about our game Vox Machinae, in which the textures are almost entirely made using Substance Designer. I'd like to start by sharing a bit of promotional material that showcases some of the work being done. Enjoy!


Seriously, it would be lovely for animation in general within Substance Designer to receive some tutorial attention. Furthermore, the request you are making just so happens to be of extreme relevance to me as well, so I would greatly appreciate if one of you wonderful SD folks explains how to accomplish this!

Specifically, I am looking into setting up frame-animated particles for Unity, and the format they use is offset UV sequences. SD is so powerful for producing amazing animated procedurals that it's just a shame it's not being put to good use. Let's figure this out once and for all!

(On a technical note, I am vaguely familiar with how to make SD lay out grid frames that are offset, thanks to a neat tutorial courtesy of THE Wes. Now if that could be combined with generating the animation? Beauty.)
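The offset-UV scheme mentioned above boils down to simple math: on a cols x rows sheet, each frame's tile is 1/cols by 1/rows of UV space, and frame index maps to a tile offset. A sketch, assuming a row-major layout with frame 0 at the top-left (the convention Unity's texture sheet animation expects):

```python
def frame_uv_offset(i, cols, rows):
    """UV offset of frame i's tile on a cols x rows sprite sheet."""
    col = i % cols
    row = i // cols
    u = col / cols
    v = 1.0 - (row + 1) / rows  # flip V so frame 0 sits at the top-left
    return (u, v)
```

For example, on a 4x2 sheet, frame 0 lands at (0.0, 0.5) and frame 5 at (0.25, 0.0).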


Yeah that does sound like a sticky situation.

In my mind, it's a bit nebulous to have a feature that seems to work only some of the time on some machines, without providing some form of notice to the user that it may or may not work. I have spent hours running through different scenarios trying to get it to work in our pipeline, and had I known in the first place that it simply might not work because of ***, then I would have saved myself trouble and time.

Point is, it helps for users to know the situation, rather than be left guessing as to the culprit. In my case, the only provided indication of a workaround was buried in the documentation of a sister application (I use SD, but the workaround is documented for Painter only). And even then, it did not address the problem.

This is not meant to sound negative, so I apologize if it comes off that way. I really do like the rest of the software and the benefits it has provided to our workflow. It's just a rough edge, one of a few here and there that can be hopefully worked through in the future.

I posed exactly this question about two weeks ago to no avail. My current hypothesis on how to accomplish this is pretty tricky, and frankly speaking I lack the technical prowess to do it.

Here's how I think it could be achieved; if someone knows whether it's possible, I'm all ears:

- Shrink the island masks until each one is a 1-pixel dot in the middle of every island. I know how to shrink masks; the trouble would be stopping when an island is down to a single pixel rather than continuing and ending up with a black mask.
- Then you'd sample a cloud noise map (or similar) to grab a random value at the same spot as each of those pixel dots.
- Then you would expand those randomly lit pixels back out until they hit the borders of the original mask islands.
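Outside SD, the same end result (one random gray value per island) can be reached directly with connected-component labeling, skipping the shrink/sample/expand dance. A small pure-Python sketch with a toy binary mask:

```python
import random
from collections import deque

def random_fill_islands(mask, seed=0):
    """Give every white island in a binary mask its own random gray value.

    Uses 4-connected flood fill; equivalent end result to the
    shrink/sample/expand idea above.
    """
    rng = random.Random(seed)
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                value = rng.random()          # one random value per island
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    out[cy][cx] = value
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return out

# Two separate islands end up with two different values:
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
```

Not an in-graph solution, of course, but it nails down what the node network would need to compute.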
