282 lines
53 KiB
JSON
282 lines
53 KiB
JSON
[
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 0,
|
|
"timestamp": "0:03",
|
|
"timestamp_sec": 3,
|
|
"title": "Template routing issue: subscription attributes not populating",
|
|
"summary": "A user explains that variables defined at subscription level work in composite routes but don't seem to carry over in template routes.",
|
|
"transcript": "Is to be an open question answer session, which means to change one of them, there is no set agenda. Although if no one has questions, I'll find what to talk to you about if if no one is bored yet. But, also, if someone asks a question and you have some feedback or you have a follow-up question on the question you're talking about or you are doing something similar or the same in your environment, unmute yourself or put something in the chat. It's supposed to be a discussion, not just question and then me answering and then going forward and so on. Yeah. I'm saying. That's that's the one thing. The other thing is that just just to keep thing ordered, if your next question is a follow-up on the current one, feel free to unmute. Otherwise, just let the discussion finish before you ask the next one.",
|
|
"is_demo": false,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 1,
|
|
"timestamp": "0:54",
|
|
"timestamp_sec": 54,
|
|
"title": "Use case walkthrough: inbound trigger \u2192 pull from partner \u2192 push back",
|
|
"summary": "Clarifies the scenario: an inbound protocol flow captures identifiers, then uses them to pull a remote file and return it to the caller.",
|
|
"transcript": "And because Teams is good with actually notifying me that you have a raised hand, If you have a question and we're in the middle of another one, just raise a hand so I know you have a question. With that being said, let's kick it off. So welcome to the September ask Annie. I will also apologize for my voice if it starts dropping because I'm fighting some floor. I don't know what I managed to caught on my current travels. But with that, let's kick it off. Who wants to start? Oh, and I have a live server. So if I don't know the answer or if we want to look at something on the server, I can jump a server up on the screen and we can look at the live server. So I don't do a lot of PowerPoint on these meetings. So who wants to start it off? Joe posted a a question on Hi. So that one so George is asking to talk about the new standard cluster. So let's see. I also have a raised hand. So, Hans, let's start with you. And then Yes. And then Jort will go to the standard cluster and talk about what changed with the Postgre. That okay? Yes. Fine. Hello, everybody. I asked I sent an email, but I don't know where it is, about, a s two connections. We have new a s two connections. And my question question is, is it necessary to create an a s two application with an in and outbox or just make, like, for inbound, receive the message just from a s two transfer site to, to, advanced routing and automatically receive the files from the transfer site. Correct. So so you probably saw the site mailbox application and that tripped you up. That's an old application. You do not need it anymore. It used to be the only way to set up a s two, but now if all you need is to receive, then all you need to do is advanced routing. You don't need to do anything special. Okay. Super. Thank you. That's good. Okay. So and this is for this is for everyone. So we carry three old applications, standard router, basic application, and a s and the site mailbox. 
While they are working just fine, they're also what we call the old way to route. They are the unconditional routing kind of templates. So at the moment, the rule is if you can do it with advanced routing, go advanced routing. And there is only one use case that doesn't do that doesn't work with advanced routing, and this is part of the standard route. In the standard router, there is a capability when the the service account receives the files for them to be prefixed not with the name of the account, but with the subscription ID. This is the part and then distribute based on that. This is the part that doesn't work in work in advanced routing.",
|
|
"is_demo": false,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 2,
|
|
"timestamp": "4:02",
|
|
"timestamp_sec": 242,
|
|
"title": "Guidance: terminology + next step via support case",
|
|
"summary": "Confirms template vs composite route terminology and recommends opening a support case to diagnose why attributes aren't passed.",
|
|
"transcript": "This is the part and then distribute based on that. This is the part that doesn't work in work in advanced routing. So if you have that use case, you need to use standard route. Anything else, advanced routing can do better. Yeah. I so I use advanced routing. Yeah. Good. Okay. So Thank very much. Standard absolutely. JORT standard cluster. Where do you want me to start with that? Well, what are the differences between the MariaDB cluster and the Postgres SQL cluster and how it reacts? Because right now, we we haven't installed it yet because there was a certificate problem. Mhmm. That's gonna be resolved in the next update, the October patch. Okay. We haven't had a chance to look at it yet. Okay. So I will start with the basic meetings, and then we'll go from there if there are more questions. The main difference is that we're changing the way the replication works altogether. So in the MariaDB world, you had two MariaDB's on each of the nodes. So let's talk about two node cluster. Same applies for the edges, by the way. So I'll just talk about one level, and we can clarify what needed.",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 3,
|
|
"timestamp": "5:17",
|
|
"timestamp_sec": 317,
|
|
"title": "Feature highlight: custom transcoding tables (demo)",
|
|
"summary": "Shows how to define custom transcoding tables at server level and reuse them across transfer profiles.",
|
|
"transcript": "And then any server was only working off of its own local database. And then every time you made a call, for example, create an account on one of the servers. If it was the primary server, it will write into its own database, then use the RMI call to go to to the servlets to go to the other server and tell it now write this in your own database, receive back a I got it, and then only then mark the adding as successful in the primary database. If the writing was in the secondary database, it would write it to the secondary, sent to the primary, primary the acknowledges, and then the primary sends back again as a distribution because it may have other servers other secondaries. So this was for any update in the database. Tracking table was only on the primary. As we all know, server locks were always local. And when a server job had to be done, the primary was picking up who to do it based on what it told the other server is doing. That was the model. It was application level only. The two MariaDB's didn't know about each other in any any way or form, and each server was working individual individually with its own database. With Postgre, we're basically changing the model completely. First of all, during a proper operation of the cluster, all the cluster nodes will work off the primary database. So both server one and server two will be connected live to the primary database for all of this configuration of anything in the database besides the server logs. So you don't have the primary secondary concept anymore. It's just a primary secondary database. That's one of the things. The replication, how the secondary is kept in in on on track is not to the application anymore. Instead, we we set up replications on the database level. So we have a new page. Let me share my screen because I was just looking at that a few seconds ago. Let me know when you see my screen. I can see it. So that is the September build, and this is how it looks. 
It looks very familiar. Right? Because it's basically just says Postgre instead of Maria. But now there is the replication settings here at the end. And if you have a cluster, which I don't have, it's a single one, you can start the database replication, set up a password, and set up here what parts of the database will be replicated. If there is a cluster, this will be filled in with the notes and so on. And this is how the replication happens. So that in the live operation, what happens is that server one and server two talk to database one only. It behind the scenes, except for the locks which stay local in the local database. So I'll just the ST server lock, the whole space just basically doesn't replicate, although it actually can be set up to replicate if needed. When something changes in the database, it's the Postgre application, the internal Postgre application that we had weaponized, basically, to use to replicate the secondary database. So the secondary database is essentially just sitting there, serving only locks when someone touches something. The sec the because of that, the secondary server actually have connection to both databases, but the live connections is only in the primary, which gives you on top of everything the queue management of the enterprise cluster now. Because in the old world, the queue was managed by the primary, while now each of the nodes actually can pick its own jobs again from the database because we remove the old way to distribute jobs. So in some ways, you can think that as a scaled down version of the enterprise cluster, but with an embedded database. And that's pretty much what it does. Anything we had been explaining how the enterprise cluster is better for distributing jobs now comes into the standard cluster as well. Because of that, there is additional requirement for more users. This is for the replication. 
There is additional requirement for more ports to be open so that the databases can talk to each other and so that the secondaries can reach the proper database. In a case of a fail off, if something happens on the primary and another server becomes a primary, the database on the current primary becomes the primary for everyone. So everyone in the cluster will join to the other primary, which also means that you don't have sync button anymore because there is no manual synchronization needed because you have only one valid database. So the automatic when the automatic failover happens, everyone will just rejoins to the proper database, and that's it. You don't need to do anything. If the what old primary shows up and becomes a primary gate, it will pick up again. So it's just a totally different model. It will take a little while to get used to it, by the way. But I'm I'm pretty sure that everyone is happy we don't have a manual sync button anymore. Mhmm. Yeah. It sounds like it will solve our in progress transactions that they keep floating. It's like the only way to get rid of them is to delete the subscription that where it's happening and then recreate it. And then, you know, then it goes away, but then it floats to another one. So we've we've been having that that problem. And Yeah. Go ahead. Yeah. In in the you know, it sounds like it's just minor and annoying, but, actually, at some point, it can stop the primary from actually doing anything, delegating anything. Yeah. Yeah. Yeah. When the primary is too busy with stuck jobs, it doesn't have any power to send anything to anyone else. In the new model,",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 4,
|
|
"timestamp": "11:24",
|
|
"timestamp_sec": 684,
|
|
"title": "Q&A: multiple transcoding tables supported",
|
|
"summary": "Confirms you can register and maintain multiple custom tables for different partner needs.",
|
|
"transcript": "Yeah. Yeah. Yeah. When the primary is too busy with stuck jobs, it doesn't have any power to send anything to anyone else. In the new model, there is no primary dispatcher anymore. So because everyone is basically on their own, well, not exactly, but it also is a lot less likely to kill your secondary by sending it too many heavy jobs while it's busy with app. So one of the problems with all the old standard cluster is that the primary doesn't really know how busy the secondary is. So if the secondary is on the brink of failing, in an enterprise cluster, the server knows not to pick up more jobs. Right? So it will just stop picking jobs for a while until it can self heal. In the standard cluster, this mechanism didn't work because there was no way for the primary to know that the secondary needs to heal. And when the primary was trying to heal, nothing was working. It was everything was blocked because there was no dispatcher. Now that changes. Part of the reason we did that is because we wanted to eliminate one of the databases. Part of it is we're using a very old library for the dispatcher model that we had to get rid of, and there was no update for it. And part of it is we know the other model works.",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 5,
|
|
"timestamp": "12:22",
|
|
"timestamp_sec": 742,
|
|
"title": "New capability: using SecureTransport as an S3 server (S3 commands supported)",
|
|
"summary": "Explains S3 access to the platform (list/create/upload, including multipart), available since an October release.",
|
|
"transcript": "and there was no update for it. And part of it is we know the other model works. And here is where I will actually warn people something because I don't know because now both servers are going to go against the primary database, this primary database will actually need more power than the old MariaDB database because you have two processing engine connecting to it. So if someone is planning something and you are low on resources on your primary box, that's the time to start thinking about a little bit more memory and CPU just to assist your database and IO and so on, just as heads up. So Is that part of the documentation, you know, installations? Like, well, you know, per files or per per load, you would need this many CPUs, this much RAM? Well, yes and no. We actually changed the install guide to have a different minimum than it used to be. So we actually updated the minimums to be to something that resembles a production server. So if you follow those, you are more likely to succeed. However, if you are an existing customer and you are on the brink of overflowing your servers and you are riding very close to your capacity, you probably should update your primary a little bit or your servers a little bit before going into the new cluster just to be on the safe side. It drops some of the load that was part of the dispatcher model, but you still have two servers working on the same database. So, essentially, your primary box now will run an SD server and the whole database for both servers. So You recommend that the Postgres SQL live on the primary server, or can it be Yes. Secondary",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 6,
|
|
"timestamp": "14:06",
|
|
"timestamp_sec": 846,
|
|
"title": "Account setup for S3 access: access key behavior",
|
|
"summary": "Covers how S3-style access keys are configured and that the strings can be defined by the admin (not necessarily generated by the product).",
|
|
"transcript": "So You recommend that the Postgres SQL live on the primary server, or can it be Yes. Secondary server or it could be a completely different server? So if it is completely different, it's called enterprise cluster now and you need the license for that. It it will live both on the primary and the secondary, but it will run off the primary and proper. So remember, you'll still have set up Postgre on each node. It's just that the second one is just getting replicating, waiting for a failover. Right? Okay. Yeah. It's so so that's the big difference between enterprise cluster and standard cluster that remains. In the enterprise cluster, you have a separate database. So the resources are just for the database on this server, while the servers themselves only carry the production the bulk the ST. While in the standard cluster, it's a database and ST on the same box, and whoever is the primary is getting also connections from the secondary. That's the main difference. So should make everyone's life a lot easier, especially because now you can see tracking tables from both places. And if someone has some tooling done that is chasing who is the primary so you can read the tracking table, you don't need that anymore. If you're doing onboarding, it doesn't matter which of the nodes you are onboard into because you don't have the old rigmarole of either a route on the secondary needs to send to the primary just to be sent back to me and so on. So efficiency wise, depending on your use case, we I've seen between 3050% improvement in throughput and speed. But this is as long as you don't kill your server, of course. Right? So Right. But it it's it's a lot more efficient model, and and how much improvement you will see in terms of speed and performance and so on really depends on your exact scenarios. There are things that just take time. But in terms of reliability, it's a lot cleaner model. 
Also, if you lose if the node need to restart because something went horribly wrong, it can self rejoin without even telling you. So it's it's a cleaner model. And the good news is that it's essentially reusing a lot of the capabilities of the enterprise cluster model,",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 7,
|
|
"timestamp": "16:19",
|
|
"timestamp_sec": 979,
|
|
"title": "Security posture: enabling S3 without broadly opening HTTPS",
|
|
"summary": "Discusses constraints where only specific ports/protocol exposure is allowed and the need to confirm the cleanest S3-only setup.",
|
|
"transcript": "And the good news is that it's essentially reusing a lot of the capabilities of the enterprise cluster model, which means that a lot of the heavy lifting on how to tune that kinks and so on had been done before that. So it's not brand new. So that's why we chose one of our databases and not a brand new one. K? Alright. Thank you. K. Anyone else? Any questions on that or anything else at all? Okay. Miguel. And I'll apologize again. Me and names. So for anyone I can't get on those ones, I try to read names. I mangle names. I apologize. You tell me how to call you. I try. Well, good. I think. Okay. So yeah. Yeah. I have a related question about the same migration from Postgres. Sure. The server log with MariaDB, both notes have different logs. They are not synchronized. Yes. With pods with Postgres, we will have a single one or we have two logs? Okay. Several logs remains non synchronized. Tracking table gets in one place only. So the logs remain local. Okay. Perfect. Thank you. Okay.",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 8,
|
|
"timestamp": "17:42",
|
|
"timestamp_sec": 1062,
|
|
"title": "Deployment architecture: standalone server + dual edge pattern",
|
|
"summary": "Describes a non-cluster setup using virtualization for scalability and separate edge nodes for internet vs extranet/VPN partner access.",
|
|
"transcript": "What else do we have? I have a big group now and no one wants to talk to me. Another question. Sure. Again. If we want to move from the standard cluster to enterprise cluster from Oracle to Postgres, It's procedure to do that, or we have to reinstall everything? What's the best approach in that scenario? You want to move are you enter you're enterprise Oracle now. You want to move to enterprise Postgre or standard Postgre? No. At this point, we have a customer with a standard cluster using MariaDB. That's right. And they want to use too much. Yeah. That's right. Okay. To enterprise, but with Postgres.",
|
|
"is_demo": false,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 9,
|
|
"timestamp": "18:44",
|
|
"timestamp_sec": 1124,
|
|
"title": "Migration story: moving from a legacy gateway + rebuilding flows",
|
|
"summary": "Explains migration constraints (limited migration tooling), rebuilding configurations, and a passive gateway design where jobs trigger push/pull.",
|
|
"transcript": "That's right. Okay. To enterprise, but with Postgres. Okay. And they want to move to Postgres standard or Postgres enterprise? I'm not so sure about that. Okay. So okay. So let's explain it like that. If they are going from MariaDB standard cluster to Postgre standard cluster, to the new modernized cluster we were just talking about, they just need to do the update in place. This will happen for them automatically. They just need to make sure all the prerequisites are done, and this will happen. You don't need to do anything special. It's just an update. If they want to do to move to enterprise cluster, my recommendation is to go the path of going to the standard Postgre first. Because if you look on the screen I was just showing, see at the bottom? We actually once you are on this Postgres standard cluster, we can help move you to an enterprise and move you out. So there is because it's Postgres to Postgres now. Right? Okay. So there will be a button. However, this will mean that they'll need to take an outage until all all of the updates and database copies and what's not happened. So if they cannot take that, then what they need to do is to reinstall everything. If they reinstall, there are two imports which are important. The XML exporting port, which case account certificates, you know, all of the database objects, will work from one to the to another. So they can export all the accounts, import them on the other side, everything works as expected. System export, which is the one on the configuration menu over here. Oops. Yeah. My database doesn't like me. So the server import, the system import export does not work because you have different clusters. So if you have things that are not in the so if you have server configurations on the setup menu or on the authentication menu and so on, you'll need to reapply them on the other side. You cannot just move them over. So that's the only thing to be careful about. 
System export works only with the same server configurations? Same server, basically. We call it the same deployment because it can be used on the Doctor site as well if you have a Doctor situation. But system export import is only supported as a backup mechanism on the same server. Same configuration, same version, same server, same everything. Okay. XML import export is portable. You can get it from one server, move it to another As long as the new server is a newer version or the same version, you're good to go. Okay. Perfect. K. Thank you. Jeff, you had your head up hand up a few seconds ago. Yeah. So hey, Annie. So I I've asked them this question knowing that I only have about five minutes before I have to jump to a conflict for a little bit. Is there they kind of in the context of best practices, is there any tuning guides, parameters perhaps",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 10,
|
|
"timestamp": "22:07",
|
|
"timestamp_sec": 1327,
|
|
"title": "Upgrade strategy: Sunday maintenance + test-by-cloning production",
|
|
"summary": "Shares an approach to validate upgrades by cloning the production VM to simulate real volume and trimming history before DB migrations.",
|
|
"transcript": "parameters perhaps For that are more centered around, like, the admin UI. So, I mean, you'd you may or may not they've been aware of some of the challenges we've had is, like, the number of routes and and stuff that we have in our environment. The admin UI becomes quite slow. As as you're going in and out of subscriptions, routes, and stuff, and hitting save, it's taken, you know, some folks minutes. Yeah. So you can increase the memory of the admin. Just hit it with more memory, and you can give more memory to a database. That's pretty much all you can do with admin. There isn't that many things we can do with admin, unfortunately. Okay. So you said more memory in the admin process and Yes. And okay. And the database pulls for the admin just to give it more database connections. That's pretty much the standard set of things for admins. The other thing is we have had historically a few bugs with a lot of the the bigger object sets. So if you're not on the latest update, you might want to check that. And the usual advice, open ticket with support, tell them exactly where you're seeing the slowness. Because part of the challenge with r and d is that they literally cannot figure out where to look for those load queries until we hear from a customer. So whenever you see something going very, very slow, you know, I have 10 have thousand routes in the same account, and it's slow. Tell us because someone will need to look at that. As much as we're testing, you know we're not testing with a thousand on a daily basis. Right? No one is clicking any thousand things during QA. So we know it works with a thousand, but no one had done metrics. So that's why we, for example, improved the case where someone has thousands of subscriptions because it was also an operational problem there. But Gotcha. It just need to know that that that's the reality of it. And it's just a query based. 
And when you're opening the case, make sure you mention your database because as it turned out, some of those things are slower on one of the databases than the other just because of how the the database query is crafted. You know? MSSQL and Oracle are vastly different database engines. So Okay. So I also kinda have a somewhat related question to that, which is trying to kinda gather metrics in the environment around this. Outside of us just standing up like browser recorded testing, like a Selenium script to test out some of the UIs to get some baseline of tests, Is there any kind of mechanism or log that you might be aware of that kind of helps showcase, like like a basically, to to grab that metric. Right? Someone goes in, opens a route, hits save. How long does it actually take?",
|
|
"is_demo": false,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 11,
|
|
"timestamp": "25:15",
|
|
"timestamp_sec": 1515,
|
|
"title": "Preview: new file tracking UI + navigation improvements",
|
|
"summary": "Shows the redesigned file tracking experience and how to drill into inbound\u2192outbound relationships.",
|
|
"transcript": "basically, to to grab that metric. Right? Someone goes in, opens a route, hits save. How long does it actually take? Not really. Not really. You can see what the audit the admin logs themselves and the Catalina out might have, but we don't really do any measuring like that. Gotcha. Okay. One more thing. While you are testing, I strongly recommend to run the same update through an API call instead of the UI and see how the speed is because that helps. So that's usually the first step of troubleshooting. Is the API behind the scenes flow, aka the database query, or is it just the admin not effectively getting the responses? Got it. Because it's a web application on top of a database with an API between them where exactly the problem is need to be isolated. So whenever I see a slowness on the API, my first thing is to try the same thing to the API. That's a really good recommendation. Okay. Because And if it was, like, shared infrastructure issues, like database or whatever, you would expect both of them to be slow. Right? Yes and no. Because in a couple of places, especially on the big sets, the database actually doesn't go to the API because it has a faster way to go. But if both of them are slow, it's obviously the database. If one of them is faster, it it just helps analyzing the situation. Because I can guarantee you, if you go to r and d or support from their r and d with this question, that's one of the things they want you to try anyway. So you might as well try. Yeah. Yeah. And, definitely, it's something to gather metrics for too. So Yeah. And it also makes it with APIs, it's a lot easier to get metrics from. Yeah. Which, again, doesn't help. And that's the other thing. Remember that we are working the so if it is about the routes, can you try the new better routes menu? I don't believe so. And I don't believe so because I don't believe I forget what version we're on, but I think the better routes UI, that only just recently came out. Right? 
It had been coming out in the last year or so slowly. So if you have it, so the does every version has more and more pieces? Even if you are not able to do everything, whenever you have a slowness in the main menu, check if the beta is there in the whatever you're doing is available Mhmm. And see if it doesn't work differently. Because it's it uses the same calls behind the scenes, but the UI component, the ones that sometimes are the delayers are different. So that's you know, just as a case, it's not ready to be used yet because there are missing pieces. Mhmm. But if you have it, it's a good test. If you're seeing very slow big slowness in the route packages to see if the beta is behaving people are better in your use case. Okay. So I know particularly the slowness the the screens we've seen folks have slowness on is under the accounts in the routes and subscriptions. That one shot.",
|
|
"is_demo": true,
|
|
"frame_description": null,
|
|
"source": "ask-annie",
|
|
"series": "ST Best Practices Q&A"
|
|
},
|
|
{
|
|
"video_id": "1020102626",
|
|
"video_url": "https://vimeo.com/1020102626",
|
|
"chapter_index": 12,
|
|
"timestamp": "28:19",
|
|
"timestamp_sec": 1699,
|
|
"title": "Monitoring challenge: linking multi-subscription trigger file patterns",
|
|
"summary": "Highlights why end-to-end tracking is harder when one subscription triggers another (empty file trigger \u2192 pull \u2192 store \u2192 second subscription).",
|
|
"transcript": "under the accounts in the routes and subscriptions. That one shot. Yeah. For that one, increase I don't think there's a beta UI for that yet. Nope. Right? No. No. We started with the templates. Okay. In which case Got it. Check the API as well, and as I said, talk to r and d, but to talk to support. Part of the challenge with this specific UI over there is that in order for us to be able to show you what we want to show you, we literally send the whole account object out in the request. And when you have, you know, 100 routes and 10,000 subscriptions or whatever volumes you have these days, it's a lot of objects to run through. And we're doing a lot of additional queries all over the place, and it works beautifully and very fast if you get five of them. Not that fast on a 100. So Oh, right. Right. Okay. That answered my question. Thanks, Annie. Okay. Good.",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 13,
"timestamp": "29:09",
"timestamp_sec": 1749,
"title": "Capacity planning: CPU/memory sizing, baselines, and real-world bottlenecks",
"summary": "Points to the capacity guide and the updated install-guide minimums, recommends doubling them for production, notes that workloads like PGP are CPU/memory intensive, and stresses establishing clean performance baselines since the file system, not ST itself, is usually the bottleneck.",
"transcript": "That answered my question. Thanks, Annie. Okay. Good. Okay. I have a question a couple of questions in the chat. Performers, what is the best part? So that's Brian, Justin, I think. Sorry if I missed the the order. The best practice around how much compute resources should be allocated to the server based on the load. So this is a little bit like the egg and chicken question because load for one person is different for another. So we do have the capacity guide, which has a very nice table showing key what exactly we had been testing with to get specific results from where we can interpret. What we also know is that there are operations on the server, for example, PGP, which are very CPU and memory, exhaustive. So depending on what you are doing exactly, you might need to play with that. So the way I would start when I'm doing resources is, first of all, as I mentioned, the install guide was updated. So we have a new minimum requirement for both CPUs and memory. For example, on the server, the absolute minimum will require now is four CPUs and 60 gigabytes of memory, but this is for a very slow small systems. Still production, but small. As long as if you are in a proper production environment, I would usually recommend doubling that. So four to eight CPUs with 32 gigabyte of memory is a great start for pretty much any environment that is not on the very small scale. Obviously, if you're going to go million, like, million files per day, then you need to go a little larger. So it's that's where it starts. So do you have what kind of transformations are you doing? So what happens to your files when they are coming? Brian? Yeah. That's perfect. And I'm also I'm also on the same team as Jeff at Wells Fargo. Okay. And so, yeah, essentially, there's there's all sorts of, you know, just like you said, different types of loads, which is great to account into the consideration, which I wasn't accounting for. So that's great. 
And I'll definitely have to keep an eye on that. And I really do appreciate that you noted that that stuff has been updated in the install guide. So I'll be sure to go ahead and check those out for further details, but that's that's a great idea and a great starting point for us. So thank you. Yeah. And because I happen to know your environment very well, all of them, actually, just read whatever and then double it and then start thinking again. It's my recommendation for your environment. You're on the bigger scale, always had been. So other from that, a couple of things to remind to mention, especially if you're building new environments, do baselines. Figure out how long something takes to the server and run it carefully or at least once a day to see if it's still behaving. You you will never know if the server is getting overloaded if you don't have a baseline to compare to, and this bass baseline need to be cleaned. Right? You cannot just compare one file today to another tomorrow. They have nothing to do with it in terms of where it's coming from or where it's going to. The other thing, in your kind of environments, and this is very large environment, but there's also it's valid everywhere. For the last maybe ten years, in my experience, the biggest bottleneck always ends up being the file system we're putting the files onto. It's not ST itself because we had been increasing the capacity. It's usually the NFS or whatever you you are using behind it. So or the network separation point or something similar like that, something outside of ST. Because people tend to",
"is_demo": true,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 14,
"timestamp": "33:24",
"timestamp_sec": 2004,
"title": "External dependencies and archiving: throughput limits and storage separation",
"summary": "Anything outside ST (file system, network, LDAP, ICAP antivirus) can cap throughput. Archiving has been reworked and is now recommended so resubmits work cleanly, but the archive folder must not share storage with the home folders, or every received file incurs double I/O on the same disk.",
"transcript": "So or the network separation point or something similar like that, something outside of ST. Because people tend to make their ST very sturdy and make it a huge application and then forget that we actually need to talk to the file system, and we need a network in and out, and we need access to the LDAP if you are doing LDAP for authentication and things like that. So whenever you or if you use iCAP for antivirus, if your iCAP engine can only handle three files per hour, I have some news for you. Your OST will not be able to handle more either because we need to wait on them. Right? So Absolutely. Things like that. Things outside. And talking about that, the one thing that I would point out is archiving. So we did a huge amount of work on getting our archiving to work a little better. For the ones that are not familiar, archiving is a feature you can enable in ST, which while we're writing the file on the file system, we write a second copy in the archive folder, and this is what is used for REST submit. So it's a very nice thing to turn on. But until half a year ago, we recommended big environments to actually disable it because it was a hack on resources. We finally fixed that. So now it's we actually recommend it to be enabled till now you can have resubmit cleanly. But please, please, please don't put the archive folder on the same shared storage where your file home folders are, because I've seen people doing that. First of all, archive in the same folder as the actual storage makes no sense whatsoever, operation.net. But the bigger problem is that now we have a double IO operation during the receiving of the file going on the same disk that is already struggling.",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 15,
"timestamp": "35:09",
"timestamp_sec": 2109,
"title": "Storage placement rule: faster storage for home folders, slower for archive",
"summary": "Archiving performs one write per received file and a read only when a resubmit happens, so put home folders on the better-performing storage and the archive folder on the weaker one.",
"transcript": "But the bigger problem is that now we have a double IO operation during the receiving of the file going on the same disk that is already struggling. So don't do that. And just as a base rule, if you have two storages and one of them is better than the other in terms of resources, IO, memory, you know, use the better one for home folders, the worst one for archiving. Archiving has a single write during the receiving of the file and a single read if and it's only happening a read only if there is a resubmit needed. So",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 16,
"timestamp": "35:40",
"timestamp_sec": 2140,
"title": "Multi-select resubmit in the UI; where the port and prerequisite changes are documented",
"summary": "The August release adds a UI button to resubmit multiple transfers at once (not yet API-based). For the port changes, the September release notes spell out the prerequisites per platform, including the additional OS-level user required for replication.",
"transcript": "So makes sense? Graham? Yeah. That's excellent. Thank you so much. Okay. Talking about resubmit, how many people saw in the August release that we have a multiple resubmit through the UI. My new server died, so I cannot show you. But you don't if you have more than one resubmit to be done, we actually get the and it's it's not an API based, unfortunately. They're looking into it. But as long as you can get them on the same page on the UI, you can hit a single resubmit button and resubmit multiple transfers at the same time now. No one cares. Okay. I had mentioned it. Okay. Anything else, Brian? No. That's it for me. Thank you so much. Absolutely. Okay. Let me give me a second to see just something. And I saw the other question is about the port changes. The release notes has the exact requirements for the disk to be successful, and it has a couple of links as well. So I'll start start with the September build release notes, Jacob. Okay. I'll take a look at that. Yep. And, also, we published somewhere on the support site an article explaining some of the differences. I don't have a handy link at the moment. If you cannot find it, I'll see if I can find it so we can I we can send it later, but it's pretty straightforward? From your perspective, as so if you look through the release notes, it will tell you you need to create additional OS level user, for example, for the replication to work and things like that. But it is listed as the prerequisite for the update on the September release. And it's very, very clearly spelling out if you're on Windows, if you're on appliance, if you're on Linux, do this, this, this. So, it it's pretty straightforward, at least which ports need to be opened, who needs to talk to whom. And so on.",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 17,
"timestamp": "37:52",
"timestamp_sec": 2272,
"title": "September release: edges move to PostgreSQL; read the prerequisites first",
"summary": "Updating to the September release or later replaces the edge database with PostgreSQL and changes the replication model; the prerequisites (new OS user, updated packages) must be met or the update fails. The details now live in the updated install and admin guides rather than a separate support article.",
"transcript": "who needs to talk to whom. And so on. And don't forget that this also applies for the edges because we're changing the model, but we're also replacing fully the database. So when you update your even if your enterprise cluster, if you update to the September or later release, your edges will now be on Postgre. So if you have replicating edges, which means you had to enable the synchronization between them or the replication, They will follow the same model as the sec as the servers. Everyone uses the database on the primary edge. If they are nonreplicating, then, obviously, they they stay still. So and but here is also the conversation to be had if it makes more sense now to make them replicating given if they were not before. So there are some choices to be made on the edges, but you have time for that. But don't forget that September or later, we'll change the database on the edges, and you need to follow the guide, and it's not as straightforward as run the double installer. The September installer is also a double. So we first update the the installer and then the ST as we sometimes do. But the prerequisites are pretty severe because it they require a new OS user and a couple more things. So just heads up. Read your release me. It's always a good thing to read the release file, but this time is crucial. If you don't, your update will fail. Don't ask me how I know. Okay. So I was reading through the read me file, and I I didn't see anything about the ports or any of the other package changes. Hold on a second. Let me just open it very quickly and see if I still don't have it open, by the way. Okay. I have it open. Let me see what it says. Free space required, temporary space required. Two OS packages need need to be updated, libxml two and libxsd. The POS will require additional non root user to operate. And you are right. The database ports are not here. Okay. Let me go and find the article then. Give me a second. Kick me out. 
I'm kind of blind with Teams. Oh, yeah. I know. I know. I'm sorry. No. No. No worries. But I was wondering if you were sharing something. I No. I heard that you were kicked off, but Yeah. You know what? Picked out, not kicked off. No. That's okay. It was just me looking on another screen for a second here. I apologize. Jacob, let me find this article because I know I saw it somewhere. I just don't reme oh, it's in the admin guide. Look at the new install guide and new admin guide. That's the answer, actually. We updated the documentation. That's why I cannot find an article. So because starting from September, this is the new land, the the the new rule, it basically the new requirements are listed in the two guides. The install guide for initial setup and then the admin guide that describes everything. We replace the old information with the new information. So let's see if I can bring that up. Docs, axe, wait. Let me share my screen. Hold on. Share. Share. Share. Can you see my screen? Yep. I do. So starting with the getting started, initial configuration, server checklist.",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 18,
"timestamp": "43:17",
"timestamp_sec": 2597,
"title": "Live doc search: hunting down the updated port list",
"summary": "Short screen-share interlude while the presenter navigates the documentation portal to find the updated PostgreSQL port requirements.",
"transcript": "I think all of the ports should have been updated over here for Postgres. Where is my install right now? Hold on. Right. Okay. It's not my regular browser, unfortunately. So as you can see, I'm still going a little bit to find what I need. Give me a second. Guys, let me just",
"is_demo": true,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
},
{
"video_id": "1020102626",
"video_url": "https://vimeo.com/1020102626",
"chapter_index": 19,
"timestamp": "44:21",
"timestamp_sec": 2661,
"title": "Port changes in detail, the docs port checklist, and a file-tracking slowness question",
"summary": "Clarifies which ports changed with PostgreSQL (444 stays for the admin UI; the inter-node RMI/TM port is dropped), walks the port checklist in the docs, promises a complete port workbook, and opens a new question about slow file-tracking queries under a 90-day retention.",
"transcript": "So prerequisites for Linux. And over here, things had been updated completely now with the for the new databases. Was 30 environments, post grad databases. This is for enterprise clusters, and then the admin guide will have something. So which what you're looking for which is which ports we're using for the conversation now. Right? Yeah. Which ports and potentially the the the the article that I have that I've been going off of has it has the ports and it says the new packages. It doesn't say which Oh, yeah. These are the two from the release guide. So the release guide have the two packages required. If you look, it's it's a couple of lines into the note over there. The there are just two of them. If you are coming from a new enough version of MariaDB, so if you're in the last three months, we already updated before the the the postgre requirement. But if you're coming from a lot older, they're in there. But, basically, if you look at the screen over here on the requirement for installing on Linux, those are the ones that we require require and the two that are really required for Postgre disguise. The LeapXML two and the LeapX SLT. Gotcha. So what happened when we dropped the September release is that we updated the admin guide and the install guide to be for the September release specifically now. Although in a couple of places, there are notes if you're on older versions, you need to do that and so on. So see how for The US user route. For example, now we have this line for the PostgresQL requires a new new non route user. This is also in the release notes. So if you're starting brand new, you will already know that. But if you're coming from an old one, that's why it's in the update. Makes sense? Okay. And then on a semi related note Mhmm. One of the ports that's no longer required is the four four four port. Between the two servers only. What was that? Oh, between the two servers. Gotcha. 
You still need it to run because that's where your admin UI runs. But back in the day, we're using it for the servlet. Remember at the beginning of the call when I was explaining how we exchange information to the RMI and the servlet, That was the part used for that. Okay. Gotcha. So that that's why it dropped. But you still need the part itself. And if you go at the very first page on the documentation, there is a very nice checklist of all the parts you need that we are I started. So if you go to docsactuary.com prod MFT product, secure transport, there's a checklist with the ports. It basically lists everything you need. So we don't drop any port per se. We actually need to add ports, but we don't need the four four four or eight four four or whatever you're using for the admin UI to be open between the nodes because the two admins don't talk to each other anymore. They just talk to to to their databases. That's the big difference. Okay. Gotcha. Thank you. Yep. Okay. The one port that is not in use anymore is the one down on if you are down on the server, there was another port called four four four three or something like that. That's the one we don't need anymore. The TM communication, the two TMs talking to each other. The RMI port, that's the one that drops, but not the four four four. Makes sense? Yes. Thank you. Okay. I'm seeing a raised hand. Jean? Good morning. Along that line, the document that you just I've been going through the getting started guide because we're installing, net new Yep. To our dev servers now. What was the document that you just mentioned that lists all the ports? Because I've been trying to go through and and get all my ports so I can get my firewall guys to open them. Okay. And I've just run into yet another port that I did not have. Okay. So go to It's a little frustrating. Yeah. Can you see my screen? So from the docs portal, manage file transfer, secure transport. Getting started. 
And start working with the the initial configuration. And over here, there is a checklist. Mhmm. And over here are a gazillion and 17 ports for different things. Now Okay. Do you see seven the one we just ran into now is 7475. Oh, this one. Do we know about that one? You do? I didn't. ACN somewhere else at 7475. Right? Yeah. And it's in the it's in the admin guide I just found. So I found a whole lot of ports, but is there a place that will list everything I need to begin with? Because it's certainly not wasn't fully in the getting started guide. Yeah. We have a workbook, and I don't know if it's it's not public, though. When when PSO are doing installs, we have our all the old workbook where we basically send it over. I am not sure what happens when someone tries on their own. Let me take a note, I'll check, and I'll get back to you. So the answer is we know the information. I don't know if it's somewhere on a public place, so I just need to check. Oh, okay. Thank you. I'll I'll let I'll let Byron know that too, by the way, so we can maybe get Yeah. So because this is frustrating. Drop me you know, drop me a note if you don't hear from me in the next couple of days if I get distracted or if I if people don't respond to me and I forget to chase them. It's Yeah. I sure will. It's I know that we have the workbook because we were just reviewing it, and I'm pretty sure we have the list somewhere in a public place as well. I just cannot it's probably somewhere on the support side. I just need to dig it out, and they don't want to spend half the call digging out. And yeah. You know, work case scenario, I'll just send you the workbook anyway. Perfect. Thank you. Okay. But let me also verify internally and see what we want to do with that. That's the more important part. So but yes. And there is also a couple of high ports that are not listed there. They're in in the 9,000 range, nine six something or another. You're you're still going to install with Oracle. Right? 
Correct. Yeah. There are a couple over there for the caches that we're using for the caching that are the range for the cache that also get missed occasionally. So ping me. I'll I'll get you the complete set of ports, and then we'll figure out how to get that published somewhere. Perfect. I'll email you now. Mhmm. Okay. Okay. What else do we have? No? Okay. Had anyone been playing with the new route packages by any chance? Hey, Annie. Hey. Oh, yeah. Not related to what you've literally just said, but I was just trying to find the unmute button at the time. So we're seeing we're we're on a we're we're on 5.5, but we're on a much older patch level. We're we're still in 2022, so we're we're quite behind the times in that respect. But we're seeing issues with when we hit the file tracking button to look at the file tracking log, it can it's taking sort of it can take sort of, like, twenty seconds or so, sometimes a bit more to actually come back with the results, but the server log most instant. Yes. How so couple of questions. What kind of database do you have? It's an external SQL database. Okay. How many days do you keep in the tracking table? So in the file tracking, we got ninety days. Ninety? Nine zero? Yeah. Nine zero. How many files do you have per day? Not that many considering what some of these guys on here probably do. I think we well, I do per month, it's about half a million, so it's not actually that many per day. So Yeah. It was. So your problem is that you're keeping too many days. So the way our this data bay the the file tracking database is not really designed to keep",
"is_demo": false,
"frame_description": null,
"source": "ask-annie",
"series": "ST Best Practices Q&A"
}
]