# Transcript: 1027626460
|
|
# URL: https://vimeo.com/1027626460
|
|
# Duration: 5017s (83.6 min)
|
|
|
|
[0:02] Mail,
|
|
[0:03] which means that I actually know what I'm talking about,
|
|
[0:07] which doesn't always happen.
|
|
[0:10] No. No. Or or better to say sometimes
|
|
[0:14] I need a few minutes to collect my thoughts,
|
|
[0:17] you know, normal. While if I have questions beforehand,
|
|
[0:20] I've done my digging earlier.
|
|
[0:22] So Hans' questions are about AS2.
|
|
[0:26] So I'll jump on the server. Do you want to tell us your questions, Hans, or I can? Up to you.
|
|
[0:32] Yeah. I have two questions. The first is the AS2 connection itself.
|
|
[0:38] When I go to the transfer site,
|
|
[0:41] I
|
|
[0:42] can't
|
|
[0:46] set a certificate if a client would like to authenticate with a certificate. I can just set the user and password.
|
|
[0:55] Yep. Give me a second. I just need to get on my VPN, which dropped on me. So let me get there. So the reason for that is that you are overthinking it.
|
|
[1:07] However, AS2 in this regard is the same as with
|
|
[1:11] the other protocols.
|
|
[1:13] So AS2 works in both directions.
|
|
[1:15] So on the outbound,
|
|
[1:16] when you are sending the files, you need to specify which certificate to use. But on the inbound,
|
|
[1:22] you cannot specify it explicitly,
|
|
[1:25] but you can enable it. So what you need to do, okay, is step number one: you go to the AS2 server on the server or the edge, wherever they are connecting. Yep.
|
|
[1:36] Come here on the settings,
|
|
[1:39] enable SSL,
|
|
[1:40] and over here
|
|
[1:42] is where it says client certificate.
|
|
[1:45] Whoops.
|
|
[1:46] Enable SSL, I said. Sorry, my server is not listening to me. And over here
|
|
[1:52] over here, where it says SSL client certificate,
|
|
[1:57] you can say either optional or required, which tells inbound AS2 connections to that daemon that it
|
|
[2:03] either will allow certificates
|
|
[2:06] or require them.
|
|
[2:08] If you want to make it required for some users and not required for others, what you need to do is create two AS2 servers.
|
|
[2:16] One of them with required, one without, on different ports, and give the correct port to each partner in the relationship so they know where to connect.
|
|
[2:25] Once you have enabled that,
|
|
[2:27] all you need to do is to go to the account
|
|
[2:31] and add the certificate
|
|
[2:34] into the login certificate store, the public part of it, like you would do with an SSH key for users. That's all you need to do. That's why I said you were overthinking it a little bit, because, yes, in PeSIT,
|
|
[2:48] we need to specify both certificates, both the partner and the login. But AS2 is a much older site, so it's designed based on the older paradigm,
|
|
[2:57] which is that inbound settings don't go on the site. But for AS2, we need to, because
|
|
[3:02] of how the protocol works.
|
|
[3:04] But we still try to keep as much out of the site as possible. So that's all you need to do.
|
|
[3:09] You basically enable it
|
|
[3:12] on the protocol view, and then you just put it as a login certificate. And then this login certificate can be used on any protocol they have.
|
|
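A rough way to picture the point above, that the client-certificate requirement lives on the listener (the daemon) while the account only stores the public certificate, is plain TLS. This is a minimal sketch using Python's standard `ssl` module, not the product's configuration API; the three policy names simply mirror the none/optional/required choices described above.

```python
import ssl

def make_listener_context(client_cert_policy: str) -> ssl.SSLContext:
    """Build a server-side TLS context whose client-certificate policy is
    'none', 'optional', or 'required', the same three choices the AS2
    daemon exposes."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    policy = {
        "none": ssl.CERT_NONE,          # password-only authentication
        "optional": ssl.CERT_OPTIONAL,  # accept a certificate if offered
        "required": ssl.CERT_REQUIRED,  # handshake fails without one
    }[client_cert_policy]
    ctx.verify_mode = policy
    return ctx

# The "two AS2 servers" workaround from the discussion is just two
# listeners on two ports, one built with each policy:
strict = make_listener_context("required")
lenient = make_listener_context("optional")
print(strict.verify_mode == ssl.CERT_REQUIRED)   # True
print(lenient.verify_mode == ssl.CERT_OPTIONAL)  # True
```

The requirement is a property of the listening socket's handshake, which is why it has to sit on the daemon and not on the account.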
[3:20] That's it. Like an SFTP key?
|
|
[3:23] Yes.
|
|
[3:24] Yep. They will be able to use the same key, the key inside of the certificate, because, as you know, a certificate is just a key wrapped up. But yes. Yes. Yeah. That's clear. Okay. I just
|
|
[3:34] enabling it
|
|
[3:36] for the certificate on the server side. That's the key message for me. Okay. Yes. And this is how it works for all the protocols. Right? Yeah. Because by default, we only ask for a password. If you want to enable certificates
|
|
[3:50] or keys, you need to enable it at the protocol level.
|
|
[3:53] But it is on the protocol, not on the daemon.
|
|
[3:57] So you can have two separate protocol listeners, one requiring one not. Of course. Yeah. It's just
|
|
[4:03] so I I know the other thing that probably tripped you a little bit is that when you go to
|
|
[4:08] let's just go there for a sec.
|
|
[4:12] When you go over here,
|
|
[4:14] you can see them both, while on the receive option the requirement is only for password.
|
|
[4:21] Exactly. This is because the requirement for a certificate needs to run before the site is found. That's why it needs to be on the daemon. Because
|
|
[4:31] it needs to tell the other server: I require that, give me that, these are my
|
|
[4:36] authentication methods. So it looks a little disconnected
|
|
[4:41] in a way,
|
|
[4:42] but it's more flexible.
|
|
[4:44] Because this way, you can have as many certificates as you want, and you don't need to reference a certificate here. So if they need to change the certificate,
|
|
[4:53] you just need to import it into the login certificates.
|
|
[4:57] Mhmm. Mhmm. Yep. Okay. Yep. So Alright. Thank
|
|
[5:01] you very much. And another AS2 thing
|
|
[5:05] I have, we always have discussion with customers
|
|
[5:10] about what they're sending, what they're not sending. So what I like to have is in
|
|
[5:16] the log,
|
|
[5:18] the HTTPS
|
|
[5:20] or HTTP header.
|
|
[5:21] Can I see this in the log?
|
|
[5:24] They are not in the logs, but they should be in the tracking table. So when you have an AS2 transfer, if you click on it Yep. Yep.
|
|
[5:32] There is the whole protocol
|
|
[5:35] communication,
|
|
[5:37] and the header should be part of it.
|
|
[5:39] All the headers. Okay. Yeah. Should be. And this is where I couldn't test it, because I didn't see your question until this morning, unfortunately. The time difference doesn't help.
|
|
[5:49] But
|
|
[5:50] so technically speaking, all of those additional parameters, everything that we have as part of the communication,
|
|
[5:57] we put inside of the tracking table log.
|
|
[6:00] Yep. So you can look at it there and see if something is missing.
|
|
[6:05] Yeah. Open a support ticket. Let's see what might be missing. I don't think that we are actually filtering anything.
|
|
[6:11] If this is not enough, if you open the TM log4j and the AS2 log4j,
|
|
[6:18] you can find more.
|
|
[6:19] Especially in the TM one and in the AS2 one, you can bump some of the loggers to DEBUG, but this will increase the log level, and finding things in DEBUG is a pain.
|
|
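Raising only the relevant loggers to DEBUG, as described above, might look like the following log4j-style fragment. The logger names here are hypothetical placeholders, not confirmed class names from the product; the point is to bump the AS2 loggers while leaving the root level alone so the log stays readable.

```properties
# Hypothetical example: raise only the AS2-related loggers to DEBUG.
# The logger names below are illustrative placeholders, not real classes.
log4j.logger.com.example.as2.transport=DEBUG
log4j.logger.com.example.as2.headers=DEBUG
# Leave the root logger alone so the rest of the log stays quiet:
log4j.rootLogger=INFO, file
```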
[6:29] So Yeah. Yeah. I will send you the tracking table so we see this
|
|
[6:33] yeah. Go ahead. I normally do it this way. I just change
|
|
[6:39] the TM log or the AS2 log or something like that. But I didn't actually know if I have to add a class or something like that,
|
|
[6:48] which class I have to log, for example.
|
|
[6:50] Yeah. That's part of the problem: I won't really have the complete list of it per se.
|
|
[6:57] So, send me a tracking table, and we'll see if it might not be there. Okay. Yeah. I'll do it. And if
|
|
[7:03] not, my next step will be DEBUG on the AS2. Basically, go to the AS2 log4j and just put everything on DEBUG over there, just temporarily,
|
|
[7:14] and see which ones produce the most useful output, and then start dialing them back down.
|
|
[7:18] Yeah. Yeah. Okay. Okay. But the idea with the tracking table, and that's why we added the protocol commands in all of the entries, is that you shouldn't need anything else. Like, in SSH, we have the complete SSH conversation.
|
|
[7:32] In AS2, we have the complete HTTPS conversation, which should contain the headers, but
|
|
[7:38] I
|
|
[7:39] just check.
|
|
[7:41] Okay? Yeah, I'll check it. Unless they're doing something weird. Okay. Yeah. Okay. Thank you very much for the explanation.
|
|
[7:47] Thank you. Absolutely.
|
|
[7:49] Okay.
|
|
[7:51] Chad, I saw
|
|
[7:53] I thought I saw someone's hand earlier.
|
|
[7:58] Cam, was that you?
|
|
[8:00] I didn't put my hand up, but I do have a question.
|
|
[8:03] Was Drawed, I believe.
|
|
[8:06] Okay. So, Cam, go ahead, and whoever else that was, please raise your hands.
|
|
[8:13] Thank you. Hello, everybody. So my question is, more on a cloud side, and I'm just curious of your opinion, Annie.
|
|
[8:21] So,
|
|
[8:21] the way my company is going in the cloud is that they are not planning to put anything
|
|
[8:27] in the and I'm sorry. I'm speaking this from AWS.
|
|
[8:31] They are not planning to put anything in the public subnet except for load balancers.
|
|
[8:36] So they have asked me... I have an on-prem installation with the standard
|
|
[8:42] DMZ or edge servers in the DMZ
|
|
[8:45] and then internal servers on our network. And they've asked me: when you go to the cloud,
|
|
[8:52] do you need to have the edges?
|
|
[8:55] Could we just install the internal, you know, the servers in the private subnet
|
|
[9:00] and then manage the traffic through the load balancers?
|
|
[9:04] You know, I did open a ticket with support because there is an admin guide on cloud implementation.
|
|
[9:12] And so support came back and said, hey. Our guide is just a rule. You can do whatever you want for an install, so you don't need the edges.
|
|
[9:22] Oh, you do. Okay. You do. And the reason is purely technical. So the biggest reason to have them, obviously, is to go through a DMZ. Right? We all know that. But they have a secondary function, which I would actually call the better function.
|
|
[9:39] So let's look at the transfer. You have this big partner that is sending you 10,000 files. They all go to one host, you know, because the load balancer recognizes it: oh, that's your IP, so I'm sending it to one of the edges or direct to the server.
|
|
[9:55] And then this edge
|
|
[9:58] so if you go directly to the server, that means that all of the processing, or at least the initialization of the processing, will stay on that server.
|
|
[10:06] And because it's an enterprise cluster, we'll try to keep it there.
|
|
[10:09] So this server is getting a little busy while its partner might be just sitting there, waiting for something to show up. But all of the inbounds go to the same server. Now if you put an edge in front,
|
|
[10:20] the edge is actually an intelligent beast. So as long as those files are coming in separate
|
|
[10:26] sessions,
|
|
[10:28] and usually when you have that many, they open at least a few, the edge can actually split the sessions coming from the same IP.
|
|
[10:36] So on the load balancer,
|
|
[10:37] all you can split on is the IP. So if you have one partner, or you have a big company where everything comes out through the same address, so they all look like they are the same IP,
|
|
[10:47] they hit one server, they stay there because we cannot redistribute.
|
|
[10:51] Right? And the load balancer cannot do anything because it doesn't know better. You need sticky sessions; they need to stay on the same protocol instance. Mhmm. And this protocol instance can only talk to its local TM.
|
|
[11:02] But when you have an edge in front of the whole conversation,
|
|
[11:06] this edge is actually session-aware,
|
|
[11:09] and it also receives information from the server which server is busy. So it knows that, oh, this server is under a huge load. I'm not going to send any more down. So it will actually use the least used one
|
|
[11:22] when possible.
|
|
[11:23] So this is the big thing with the edges. Forget about the DMZ and so on, because, yes, in AWS,
|
|
[11:29] it doesn't really make much of a difference unless you want to put it in the public part, and usually people don't. It's the redistribution
|
|
[11:36] of the load and the protection of the back ends. And it's not even the redistribution that I like as much; the reason I always tell people you want edges is the protection of the servers.
|
|
[11:49] The fact that a server can tell the edges, I am very busy. Leave me alone for a second, and the edge will prioritize the other servers
|
|
[11:58] and try to protect.
|
|
[12:00] Makes sense?
|
|
[12:01] Yes. Thank you. And it is session-based. So if you have someone opening a single connection and just dropping 10,000 files one after another, that will stay on the same server. We cannot split that; it's one session.
|
|
[12:16] But it's the cases where you have the multiple sessions, which a load balancer cannot split, same IP,
|
|
[12:22] but the edge can.
|
|
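The distribution difference just described can be simulated in a few lines. This is an illustrative sketch, not the product's actual algorithm: an IP-sticky load balancer pins every session from one partner IP to a single backend, while a session-aware edge hands each new session to the currently least-busy backend.

```python
def lb_ip_hash(sessions, backends):
    """IP-based stickiness: every session from one IP lands on one backend."""
    load = {b: 0 for b in backends}
    for ip in sessions:
        load[backends[hash(ip) % len(backends)]] += 1
    return load

def edge_least_busy(sessions, backends):
    """Session-aware edge: each new session goes to the least-busy backend."""
    load = {b: 0 for b in backends}
    for _ in sessions:
        target = min(backends, key=lambda b: load[b])
        load[target] += 1
    return load

sessions = ["10.0.0.7"] * 6          # one partner opening six sessions
backends = ["server1", "server2"]

print(sorted(lb_ip_hash(sessions, backends).values()))      # [0, 6] - one server takes it all
print(sorted(edge_least_busy(sessions, backends).values())) # [3, 3] - spread evenly
```

The single-session case from the discussion still cannot be split by either approach; the gain only appears when one IP opens multiple sessions.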
[12:24] Plus, and this is from an operational perspective,
|
|
[12:28] splitting the
|
|
[12:34] protocols from the TM, and leaving only the TM and the admin daemon on the server, means that no one is competing with the TM for resources, which I also like. Of course, you can always get progressively bigger servers,
|
|
[12:47] but resources are resources, and we all know the TM can get bogged down. The fewer Java processes running at the same time alongside it, the better.
|
|
[12:55] So I still do think edges are required.
|
|
[12:59] That's part of the reason why, if you look at our cloud edition, which, you know, is not out yet, but we are working heavily on it, it actually splits the TM out on its own.
|
|
[13:07] So over there, there will be no concept of edges anymore,
|
|
[13:11] but the protocol
|
|
[13:12] pods are outside of the TM pod in all cases.
|
|
[13:19] So Okay. Thank you.
|
|
[13:21] Okay.
|
|
[13:23] Any other questions, Cam, or someone else on that?
|
|
[13:28] So the answer, Cam, is that I strongly recommend edges.
|
|
[13:32] That doesn't mean you have to have them, but there are technical,
|
|
[13:35] security, and operational reasons to have them. So
|
|
[13:43] okay?
|
|
[13:45] Yeah. Thank you. I don't have anything else on that. Thanks so much.
|
|
[13:49] Okay.
|
|
[13:50] Question from Joel White.
|
|
[13:53] Best practices relating to transfer efficiency: are folder monitors better
|
|
[13:58] than SMB or other types of transfers using routes?
|
|
[14:03] So, Joel, it really depends on where things are. A folder monitor is extremely efficient if your storage is local.
|
|
[14:12] Part of the challenge
|
|
[14:13] so SMB is a lot less efficient because for every SMB connection, we do what is essentially a dynamic in memory mount per file, and we cannot reuse those.
|
|
[14:27] So if the choice is between a folder monitor and SMB, the folder monitor will always win performance-wise,
|
|
[14:34] no matter what.
|
|
[14:36] But folder monitor requires attached storage, so it needs to be physically attached to the same server.
|
|
[14:41] So and there is one use case where SMB might be a little faster,
|
|
[14:47] and it is when you have a very low number of files and very bad IO wherever your folder monitor is watching.
|
|
[14:55] But for the most part,
|
|
[14:59] if you can attach it, that's better: local to the server, or local to an internal network. If you can attach it, we can monitor it. Unless you're on Windows,
|
|
[15:10] where we can monitor UNC paths as well.
|
|
[15:12] But on
|
|
[15:14] Linux,
|
|
[15:15] in order to do a folder monitor, you actually need to attach it. And this is what becomes
|
|
[15:20] this is one of the big conversations you need to have with your team when you're going to cloud,
|
|
[15:25] come
|
|
[15:26] back to what Kamal was asking about cloud, is because folder monitors, unless you are monitoring your own folders,
|
|
[15:33] which is weird sometimes.
|
|
[15:35] But if you are monitoring someone else's folders that are attached when you move to the cloud, you cannot
|
|
[15:41] monitor them anymore.
|
|
[15:42] So you have multiple choices. One of them is SMB
|
|
[15:47] or and, yes, if
|
|
[15:52] okay.
|
|
[15:53] So if you have a local folder that you're monitoring and you're moving to the cloud, you need to find out how to get those files now. You cannot attach it to the cloud anymore. So it's SMB, or you can leave a CFT behind
|
|
[16:07] that does the monitoring for you and pushes the files into ST.
|
|
[16:12] So about routes, the next question from Joel, a follow-up: do we primarily use routes with external connections?
|
|
[16:19] So
|
|
[16:21] yes and no, because I don't know how much of ST you know, and I think
|
|
[16:29] either I'm missing the question or the terminology is a little weird. Routing happens once the file arrives.
|
|
[16:36] It doesn't matter if we pull it from somewhere or it gets sent to us over an inbound connection;
|
|
[16:44] routing triggers when we receive a file. And ST is event-based, so you don't need any listening for it, or any folder monitoring, or anything. The folder monitor exists to bring in files that are not in ST already.
|
|
[16:57] But as soon as the file hits the subscription folder correctly,
|
|
[17:00] which means from a pull or a put,
|
|
[17:03] and there are special cases for errors and things like that, but let's think about the positive case only.
|
|
[17:09] We trigger the routing at this point. So you use the routing to process the file. It can be something as easy as just moving the file somewhere else, delivering the files, encryption, decryption, or anything like that.
|
|
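The event-driven model described here can be sketched as follows. This is illustrative Python, not the product's API: routing is a chain of steps that fires on the file-arrival event itself, with no polling listener involved.

```python
# Hypothetical sketch of event-driven routing: route steps run when a file
# arrives, whether it was pulled in or pushed to us.
routes = []

def on_file_received(filename):
    """Triggered by the arrival event itself; no polling listener needed."""
    for step in routes:
        filename = step(filename)
    return filename

# Route steps can be as simple as a move, a delivery, or encryption:
routes.append(lambda f: f + ".pgp")          # pretend-encrypt
routes.append(lambda f: "/outbound/" + f)    # pretend-deliver

print(on_file_received("invoice.csv"))  # /outbound/invoice.csv.pgp
```

A folder monitor, in this picture, would only be the thing that calls `on_file_received` for files sitting outside the system; once the file is in, the routing chain is the same either way.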
[17:23] So
|
|
[17:29] So
|
|
[17:32] and so
|
|
[17:33] from the chat,
|
|
[17:35] because I'm not sure everyone is reading it. So Joel is kind of a
|
|
[17:41] new admin. He inherited a configuration that is now erroring a lot.
|
|
[17:46] You can look into that, but be very careful with folder monitor. So I don't know what your use case is exactly.
|
|
[17:53] You will still need routing. So
|
|
[17:57] think of the routing as the second step. The folder monitor can only get the files from somewhere into the routing. So
|
|
[18:05] if the files are arriving
|
|
[18:08] already,
|
|
[18:10] then you will need the routing anyway.
|
|
[18:14] It's not either or. They work in conjunction.
|
|
[18:17] And if the file is sitting
|
|
[18:22] somewhere,
|
|
[18:24] you'll need the folder monitor to bring it in, which drives the file into AR, into routing, to actually send it out.
|
|
[18:32] The server is event-based, so you don't need a folder monitor to listen for files arriving.
|
|
[18:38] That happens automatically.
|
|
[18:40] This is what usually
|
|
[18:42] happens with newer admins when they are used to servers that require a listener for that, whereas ST does it automatically for you.
|
|
[18:51] So
|
|
[18:52] Internal LAN transfers: so you have a mix of internal LAN transfers and external SFTP. Is this for deliveries out
|
|
[19:02] or for bringing files in.
|
|
[19:06] So if it is for deliveries out,
|
|
[19:09] if you are delivering internally with the folder monitor,
|
|
[19:17] then
|
|
[19:20] are you delivering with the folder monitor to a place elsewhere?
|
|
[19:28] Because if this is what you're doing, then what George is saying in the chat over there is correct: you can try to reduce the number
|
|
[19:35] of
|
|
[19:36] concurrent connections on the transfer site or the folder monitor, so we send the files out slower,
|
|
[19:43] to see if that might help.
|
|
[19:45] There are things like that that you can actually
|
|
[19:48] troubleshoot.
|
|
[19:52] I would recommend Joel open a ticket with support, with your errors, for them to take a look and see if they can spot where you have a disconnect.
|
|
[20:01] Because it's a little hard to try to give advice
|
|
[20:05] without actually seeing the configuration,
|
|
[20:08] but the basic things is, number one,
|
|
[20:11] and you've been talking to them as well.
|
|
[20:14] Joel, you know what? Send me a mail with the ticket number or with the use case, and I'll try to catch up and at least see what the use case is all about. Because sometimes it's just a question
|
|
[20:26] of
|
|
[20:27] figuring out how to optimize what's going on. One of the big things, and one of the challenges of ST, is that it's not really
|
|
[20:38] one thing that works exactly the same way for everyone.
|
|
[20:41] So what is the best case and the best practice for someone turns out to be worse practice for someone else because of either volume or because of how their storage is behaving
|
|
[20:52] or something like that. And there is probably 20 different ways to set up pretty much anything on this server.
|
|
[21:00] And sometimes
|
|
[21:01] people overcomplicate
|
|
[21:03] things.
|
|
[21:04] Case in point: Hans not seeing the obvious way to set up the AS2 outbound
|
|
[21:10] inbound. Sorry, Hans, I'm not picking on you. I'm just pointing out that you had to step away from the thinking that everything should be on the site, and actually see it outside.
|
|
[21:20] Right?
|
|
[21:21] So I think there might be something like that. So, Joel, I'll need more details. So just ping me outside of that. I'll try to get to you in the next couple of days, and we'll see if I can help a little bit. Okay?
|
|
[21:37] Yep.
|
|
[21:38] Drop me a mail, Joel.
|
|
[21:41] Okay.
|
|
[21:43] So I have another question in the chat from Jorg, and then next will be Jake.
|
|
[21:48] So Jorg migrating
|
|
[21:50] from
|
|
[21:54] from April to October.
|
|
[21:56] Right?
|
|
[21:58] The most recent update still will not show why an admin password attempt failed. It just says one of the rules was not met, but doesn't list the rules, unlike the industry standard.
|
|
[22:14] I don't know about that, Jorg. If it's still happening, go open a ticket.
|
|
[22:19] Yep. Well, we're testing out the latest version
|
|
[22:22] tomorrow in dev, so I'll check that out and see if it's still working that way.
|
|
[22:27] But yeah, you're just left to guess if you don't know the rules or don't have access to them.
|
|
[22:33] It's it may be in a different area, but when you're actually doing admin,
|
|
[22:38] changing your password, it does not alert you to what the rules are. And that would be an easy UX change
|
|
[22:47] if if Yeah.
|
|
[22:49] So, part of it is that we are redoing
|
|
[22:53] the admin and changing from one technology to another slowly.
|
|
[22:57] So some of the things just drop off
|
|
[22:59] the radar occasionally.
|
|
[23:01] I don't
|
|
[23:03] think I've ever even tested changing a password without following the rules on the admin side, just because I don't test these kinds of things. Right? I've been playing with users mostly. So it's possible that we just dropped it somewhere.
|
|
[23:15] So if it's still behaving that way,
|
|
[23:19] just open a ticket because we're not going to fix it if we don't know about that. And I can guarantee you that's not the use case anyone in support or PSO or field or me will just stumble on.
|
|
[23:30] We just don't play with admin accounts that much unless someone points it out. And, you know, as soon as we're done with this meeting, I'll probably go to my server and
|
|
[23:39] set up a policy and see what I can figure out.
|
|
[23:42] But yeah, open a ticket with support, because, as you said, it's an easy fix. It might be just a question of someone missing a point somewhere.
|
|
[23:52] But I agree. We should be showing the rules you are breaking because otherwise,
|
|
[23:57] how
|
|
[23:58] are you to guess what's wrong?
|
|
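The behavior being asked for, reporting which rules failed instead of "one of the rules was not met", can be sketched like this. The rules themselves are hypothetical examples, not the product's actual policy.

```python
# Hypothetical password policy: each rule has a description and a check.
RULES = {
    "at least 8 characters": lambda p: len(p) >= 8,
    "at most 64 characters": lambda p: len(p) <= 64,
    "contains a digit": lambda p: any(c.isdigit() for c in p),
    "contains an uppercase letter": lambda p: any(c.isupper() for c in p),
}

def failed_rules(password):
    """Return the descriptions of every rule the password violates."""
    return [name for name, check in RULES.items() if not check(password)]

print(failed_rules("abc"))
# ['at least 8 characters', 'contains a digit', 'contains an uppercase letter']
print(failed_rules("Passw0rd"))  # []
```

Returning the full list, rather than stopping at the first failure, is what makes the error message actionable.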
[24:00] Yeah. So,
|
|
[24:02] basically, you just type in something that could be too long, you know. So it takes a bit of time to figure it out. So, anyway. Related to that... So before you go on, Jorg, just one thing.
|
|
[24:15] Whenever that happens, as a workaround in the meantime, don't forget that the password policy for admins and users is the same.
|
|
[24:23] So
|
|
[24:24] go
|
|
[24:25] try to change the user password instead,
|
|
[24:28] and it will give you
|
|
[24:30] the
|
|
[24:31] proper
|
|
[24:32] rules over there. We need to fix it, but as a workaround for your admins,
|
|
[24:36] for now,
|
|
[24:37] change a user password to show you the rules.
|
|
[24:41] Just saying. Oh, okay. Alright. Thank you. That's one of the strengths of the server and one of the best things, you know? Of course, if you're on the edges, there are no users there, and it can be a different policy, so it's not that easy.
|
|
[24:55] But if it's down on the servers, just use the fact, or,
|
|
[25:00] you know, side effect of the fact that we have a single policy.
|
|
[25:05] Uh-huh. Yeah. And related to that, somewhere down the road map is going to be integration
|
|
[25:10] with,
|
|
[25:11] you know, storage repository
|
|
[25:13] password repositories.
|
|
[25:15] So I'm hoping password state's gonna be one, but I'm sure there's there's other ones.
|
|
[25:20] AD,
|
|
[25:21] hopefully, that's part of it. You know? So that way, you know,
|
|
[25:25] we will just use whatever the current was because we do enforce, you know, a, you know, thirty day password
|
|
[25:32] change policy internally.
|
|
[25:34] So, you know, it would never be an issue for our admins or for our users if we could integrate that. So
|
|
[25:41] do you know when that's gonna roll out?
|
|
[25:46] Sorry.
|
|
[25:47] I will apologize because I was typing a quick answer in chat, so I missed the last
|
|
[25:52] part of that
|
|
[25:54] question. Oh oh, the integration
|
|
[25:57] with external
|
|
[25:59] password repositories
|
|
[26:01] for admin users,
|
|
[26:02] you know yeah.
|
|
[26:04] We're
|
|
[26:05] slowly starting to roll it in, in the next year probably. So at the moment, we have the secrets file, the entire file, that can come from a vault,
|
|
[26:15] And we're starting to slowly roll more and more external storages.
|
|
[26:21] So if you have a specific one that you need, open an idea in the ideas portal
|
|
[26:29] because we are looking for priorities.
|
|
[26:32] I know that the next priority will be the S3 sites,
|
|
[26:36] and I don't remember which the other site is. So S3 and one of the other modern sites
|
|
[26:42] will also get vault support.
|
|
[26:45] I've been commenting on those tickets related to that for, like, two years now. So I'm just wondering.
|
|
[26:50] We're getting there.
|
|
[26:52] So Yep. See, if you'd asked six months ago, we had nothing whatsoever. Right?
|
|
[26:57] Now we're getting there slowly.
|
|
[26:59] It's
|
|
[27:00] part of the challenge in doing it is that there are so many places in the code that rely on the password or
|
|
[27:08] the secret being just there in the database, so we don't put it anywhere else. We just, you know, go to the cache or to the database whenever we need it. Right?
|
|
[27:19] So Mhmm. In order to do it externally,
|
|
[27:23] we need to find all of those places and standardize the call of how to grab that password so that it can be replaced and put where it needs to be or a secret.
|
|
[27:33] So transfer sites, in a way, are easier than admin accounts
|
|
[27:37] in a lot of ways
|
|
[27:39] just because but one thing I'll point out for the admin accounts now that we have the OAuth two plugin properly done,
|
|
[27:47] as long as you have OAuth2, you are all set, because you can handle all the passwords through OAuth2.
|
|
[27:55] And for the ones that haven't seen that yet, this also allows you to not have local admins at all, and to actually
|
|
[28:06] assign roles from outside.
|
|
[28:08] So
|
|
[28:09] which
|
|
[28:10] is better than vaulting just the password.
|
|
[28:12] So Yeah. Please come to the new world of the August or September release, I don't remember when we pushed it. But the new OAuth, you want to actually look at it because it's important; it's a plug-in. Look at it, because for the first time ever,
|
|
[28:26] we allow admin users to be
|
|
[28:30] authorized outside of ST. And when I say first time ever, I'm nineteen years into this company, nineteen and a half. It's really the first time ever.
|
|
[28:40] We had never allowed admin users to not exist locally, and now we do, for the first time. And that's what you might want to look at. Because for me, that's much better than just vaulting a password.
|
|
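The idea behind externally authorized admins can be sketched as reading roles out of the identity provider's token. This is illustrative only: the claim name "roles" and the token layout are assumptions, not the plugin's actual schema, and a real deployment must verify the token's signature before trusting any claim.

```python
import base64
import json

def roles_from_jwt(token):
    """Extract a hypothetical 'roles' claim from a JWT payload.
    NOTE: no signature verification here; a real deployment must verify
    the token before trusting it."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("roles", [])

# Build a toy token (header.payload.signature) just to demonstrate:
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "admin1", "roles": ["account-manager"]}).encode()
).rstrip(b"=").decode()
token = "e30." + payload + ".sig"

print(roles_from_jwt(token))  # ['account-manager']
```

Because the roles arrive with the token, no local admin account, and therefore no local admin password, needs to exist on the server.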
[28:53] Yeah. SOC audits are getting a little bit more intense.
|
|
[28:57] So that that could be one way around Yeah. Securing.
|
|
[29:01] Yeah.
|
|
[29:02] Off
|
|
[29:03] the point, but to Joel's point: I've had the same problem where it's been performance-related. All of a sudden, a server wants to dump out, like, you know, 10,000 files, and then the target just can't keep up with it
|
|
[29:19] because it's using SMB,
|
|
[29:21] pulling it out. So we get the sandbox errors. We're getting, you know,
|
|
[29:25] you know, misses. And so I I
|
|
[29:27] for our old local installation,
|
|
[29:30] you know, I was getting upwards of a 5%
|
|
[29:33] error rate. And so for thousands of files, that was pretty bad, and that meant, you know, actually manually recovering them. So that's where another one of the ideas I put out there, and other people supported, was batch processing: picking up the files as a batch, opening one connection,
|
|
[29:51] pushing those files up as a batch,
|
|
[29:54] and then closing the connection.
|
|
[29:56] That would solve a lot of problems. And
|
|
[29:59] this is where they're also looking at it. As you may or may not know, the SSH site now can do batches, in a way. It's still multiple connections usually, but it's pooling the connections. So if you use SSH and you enable it,
|
|
[30:13] it will,
|
|
[30:15] open the connection, but leave it open for a little while longer if there are more files coming. So if you have 10,000 files going over SSH,
|
|
[30:22] it will reuse the connections. It's the reusability.
|
|
[30:26] They're
|
|
[30:27] doing studies on which of the other protocols can do that safely.
|
|
[30:31] Part of the challenge with SMB is because it's literally a dynamic mount
|
|
[30:36] in memory,
|
|
[30:38] and those can be very
|
|
[30:40] brittle
|
|
[30:42] if you don't close them, and they can actually get the other server in trouble if we lose the connection without closing it.
|
|
[30:49] But SMB is on the high-priority list. They're looking into it. Will it come?
|
|
[30:56] I don't know.
|
|
[30:57] Yeah. I know that they were looking into it as one of the sites that they will try to do that for.
|
|
[31:03] So the latest version of the SSH protocol, you say, supports
|
|
[31:08] leaving a connection open for multiple files?
|
|
[31:11] Yes. Hold on a second. Let me go back to my server.
|
|
[31:16] So there is a parameter in the server configuration,
|
|
[31:19] whose name, of course, I keep forgetting. I'll just search for SSH.
|
|
[31:28] Give me a second. You know, that's why I ask people to send questions, because I don't
|
|
[31:35] remember everything.
|
|
[31:36] Oh, it's SSH pool something,
|
|
[31:40] I think.
|
|
[31:42] Here it is. SSH connection pool.
|
|
[31:45] SSH set connection pool. What it will do is to
|
|
[31:50] so the number of connections we will open will be up to the maximum number on the transfer site itself.
|
|
[31:56] And when we're done with the file, we'll check to see if there is another file in the event queue going in the same direction. If there is, we'll just leave the connection open so the next file can pick it up. If you have 10 connections as a maximum on the site,
|
|
[32:14] we'll open the 10 connections and we'll keep all of them open if there are still files coming. So if you have the 10,000 files going out,
|
|
[32:21] we'll open up to the maximum number, but we'll never close them. We'll just keep pushing files to the open connections.
|
|
[32:28] So the subscription at that point manages it. It says, okay. I grabbed these files. I'm pushing it through. And they'll just keep going through that route until the last file, and then it'll close connection.
|
|
[32:39] Or until, so if there is a delay in the files coming to the event queue, for example, you now have a huge amount of big files just coming in and they're still not in the queue when we look, we might close some of them. But the reality is that if you have a lot of files, those will stay open a lot more. Because outside of that, the way ST works is we open the connection, we send the file, we close the connection, one by one.
|
|
[33:04] This just works with a pool.
|
|
[33:07] Yeah. And the other reason for that is the remote site may not like it. We've had them reject our files simply because we're spamming them with connection attempts. So in my experience, there is no reason not to enable that.
|
|
[33:21] Because
|
|
[33:22] it will if you have just a couple of files and nothing else is coming, it will not keep them longer.
|
|
[33:28] You can control,
|
|
[33:30] in fact, what is the minimum idle time after which we will evict
|
|
[33:35] the thread out from the pool, and the time between evictions, so how often to look for connections to check if they still need to stay alive.
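The eviction behavior described here (a minimum idle time before a connection can be evicted, plus a periodic sweep) can be sketched as a minimal pool. This is an illustrative model only, not SecureTransport's actual implementation; the class and parameter names are hypothetical.

```python
import time

class ConnectionPool:
    """Toy sketch of the pooling behavior described above: connections are
    returned to the pool after a transfer and reused by the next file; a
    periodic sweep evicts connections idle longer than min_evictable_idle."""

    def __init__(self, max_size, min_evictable_idle=15.0):
        self.max_size = max_size
        self.min_evictable_idle = min_evictable_idle
        self.idle = []       # list of (connection, time_returned)
        self.open_count = 0  # total connections currently open

    def acquire(self, connect):
        # Reuse an idle connection if one is available...
        if self.idle:
            conn, _ = self.idle.pop()
            return conn
        # ...otherwise open a new one, up to the pool maximum.
        if self.open_count >= self.max_size:
            raise RuntimeError("all connections in use; retry later")
        self.open_count += 1
        return connect()

    def release(self, conn):
        # Keep the connection open for the next file instead of closing it.
        self.idle.append((conn, time.monotonic()))

    def evict(self, close):
        # Periodic sweep: close connections idle longer than the threshold.
        now = time.monotonic()
        keep = []
        for conn, returned in self.idle:
            if now - returned >= self.min_evictable_idle:
                close(conn)
                self.open_count -= 1
            else:
                keep.append((conn, returned))
        self.idle = keep
```

With many files in flight, `acquire` keeps handing back the same open connections, which is the reuse effect being described; with only a couple of files, the sweep closes them shortly after they go idle.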
|
|
[33:44] So if you enable that, there shouldn't be a performance hit. I haven't heard of a customer yet that enabled it and saw a performance degradation.
|
|
[33:53] Even if it never is in use, what will happen is that your connections just stay open a little longer if there is another file, but we're trying to be intelligent. We usually look at the queue.
|
|
[34:04] But it doesn't always kick in properly, so sometimes it might not really do that.
|
|
[34:10] But even then, fifteen seconds later,
|
|
[34:13] it will kick it out, and it will close it. So you can see single connections take a little longer,
|
|
[34:19] but when you have more files, they'll go faster. So if you have this use case often,
|
|
[34:25] you really want to enable that.
|
|
[34:28] Yeah. I usually limit my transfer sites and the routes and the subscriptions for
|
|
[34:35] concurrent connections in parallel. In which case,
|
|
[34:38] I strongly recommend you enable that so that they can be reused, and they can be reused both for pulls and pushes. So it's not just for pushes, it's for gets.
|
|
[34:46] So if you are pulling from the same it it it also will work on for pulls.
|
|
[34:52] Just saying.
|
|
[34:53] Okay.
|
|
[34:54] But it is SSH only.
|
|
[34:57] And they are looking at other protocols. They need
|
|
[35:01] feedback of which protocols are the most needed. They know about SMB and s three because we know this is where the big amount of data usually are going. Folder monitors not counted because they are very, very weird that way.
|
|
[35:15] But
|
|
[35:16] yeah. So that's it on that.
|
|
[35:19] Right. Makes sense? Yeah.
|
|
[35:22] And it's all or nothing. You don't specify anything on the transfer site. So you enable it once. That's it. It will trigger for every single transfer site you have. But, again,
|
|
[35:31] if you don't need to use the pool,
|
|
[35:33] it won't slow you down. You'll just see the connection closing a little later than usual for some cases,
|
|
[35:40] which is okay. No one cares for the most part. You know, we'll close it sooner or later.
|
|
[35:45] But it will help you. You're talking about
|
|
[35:48] pull from partner. What about send to partner? Is that the deal? Yeah. Yeah. Yeah. It's both. So the server initiates both pull and push. It doesn't matter if you're sending the files out or getting the files from them. It just controls
|
|
[36:01] when the transfer site opens the connection out.
|
|
[36:06] Okay.
|
|
[36:07] Yeah. And, of course, yeah, and, of course, in a cluster, the pools are independent between the cluster nodes, so you'll have two separate pools on the two nodes if both of them are sending at the same time. You know how it works. It's Java pooling.
|
|
[36:19] But it still is better than what we have otherwise. And especially if you are restricting to four connections only,
|
|
[36:25] this should give you a performance boost.
|
|
[36:28] Yeah. And I might be able to open up to 10 connections now, you know, by turning that on. Right? Maybe.
|
|
[36:36] Or at the very least, the four connections will actually allow you to send a lot more files. The biggest use case where this will give you a performance boost
|
|
[36:47] is,
|
|
[36:49] when you have a huge amount of very small files.
|
|
[36:52] Because, you know, when you have these kilobytes and megabyte files that take forever to do that connection authentication,
|
|
[36:57] send the file, and so on, now they'll be just flying through. So if you pick up a big amount of small files and you can reuse the four or 10 connections instead of opening one for each file,
|
|
[37:09] the files just go very fast because we are fast on delivery.
|
|
[37:14] Yeah. Right? That's when you hit the 5% error rate. So maybe
|
|
[37:18] slowing it down is is
|
|
[37:20] this way is better. Yes. And that's what I'm saying. First step, just enable that and do a test with your big
|
|
[37:29] connection, but I would still keep it at four if this is what they're used to.
|
|
[37:33] Okay.
|
|
[37:34] Just saying. And you can open it more for some partners, but leave it smaller for smaller servers, you know, because that's what the transfer site and connection
|
|
[37:43] and subscription
|
|
[37:44] and route level allows you to control specifically for this specific partner. Right?
|
|
[37:50] But enable that if you haven't.
|
|
[37:53] And this has been around for at least a year now, so I'm pretty sure you have it even in the old server.
|
|
[37:58] Okay.
|
|
[37:59] Okay.
|
|
[38:01] I so I'm going back to I know I have a couple of raised hands. I'll be there in a second. Jake, you are next. Just some a follow-up for Angelo because we were having little chat conversation.
|
|
[38:12] Angelo,
|
|
[38:12] so
|
|
[38:14] the way it works is that so the question from Angelo is when you look at the file tracking, usually, there is a,
|
|
[38:22] the MDN over here getting generated,
|
|
[38:24] but not for all transfers, or if you're on a new server that someone else set up, it's missing. The way to force ST to create an MDN for every transfer is to go to your
|
|
[38:38] where is my
|
|
[38:40] to the certificates and create a certificate called MDN.
|
|
[38:44] That's all you need to do. This will be the one used for MDNs. So if the MDN certificate exists,
|
|
[38:52] ST will generate an MDN for the tracking table. If it doesn't, we will not. That's the control.
|
|
[38:59] Oh. Small letters.
|
|
[39:02] MDNs,
|
|
[39:02] small letters. It just needs to be valid. That's all. It can be self-signed or you can import another. It doesn't matter. You just need to create it,
|
|
[39:11] and it will start generating MDNs for you.
|
|
[39:15] Well, thank you, Annie.
|
|
[39:16] Okay. Long time long time, by the way.
|
|
[39:20] Yep.
|
|
[39:21] Angelo, I haven't talked to you in a while. What okay.
|
|
[39:25] It's been a while. Yes. Okay. Okay. Me too.
|
|
[39:29] Yep.
|
|
[39:30] That's it. So that's it, Angelo. That's literally it. Okay, Jay. I appreciate it.
|
|
[39:36] And I lost one. Arpit
|
|
[39:38] or did we lose Arpit altogether?
|
|
[39:41] Because I had another hand earlier.
|
|
[39:44] But in the meantime, Jake, you're up. Hey.
|
|
[39:48] So
|
|
[39:49] my question's kinda related to what George was saying.
|
|
[39:55] It's based around what's the best way to limit the number of outgoing transfers that SecureTransport is doing, because I ended up hosing our test server when I enabled a bunch of subscriptions and folder monitors to pull files, like, tens of thousands of files all at once. And they're all going out one transfer site to Azure Blob Storage.
|
|
[40:19] So ST is not very good when we manage to get more files than we can process.
|
|
[40:25] So
|
|
[40:26] is this a normal use case, or is this just a huge amount of files you need to pull once?
|
|
[40:32] Yeah. It's gonna be, like, daily.
|
|
[40:34] So
|
|
[40:37] yeah, yeah, yeah, yeah. What protocol are they arriving on? Are you doing folder monitor to grab them?
|
|
[40:42] Yes.
|
|
[40:44] Okay.
|
|
[40:47] So how many files are we talking about? 10,000, or 100,000, or a thousand?
|
|
[40:54] About 10,000 a day.
|
|
[40:57] And they arrive at about the same time, so you need to pull them in one big swoop? Or are they arriving through the day?
|
|
[41:05] They arrive all at once, but it's fine if they get processed throughout the whole day.
|
|
[41:11] Okay. So here is the challenge with that. ST is not really good at protecting itself, because our job, what we try to do always, is get the files in as fast as possible and out as fast as possible, so we cannot do a delay. So the only way to do what you need to do here is to delay the pulling of the files in some way, and that's why I was asking how they are arriving and so on.
|
|
[41:34] So 10,000 at the same time from a folder monitor will get them, but your server will basically do nothing else for the next few hours, if it doesn't crash.
|
|
[41:44] And the newer servers are better at not crashing.
|
|
[41:48] So most likely it just gets hosed and you sit there and wait it out, which is not what you want to do.
|
|
[41:55] So couple of options here.
|
|
[41:58] One of them is, see if there are any naming patterns that will allow you to split that into chunks
|
|
[42:06] so that you can grab
|
|
[42:08] a few 100 or a few thousand at a time,
|
|
[42:11] and then use not the folder monitor without
|
|
[42:13] the scheduler, but the scheduled folder monitor for each group
|
|
[42:17] individually at specific times
|
|
[42:20] and
|
|
[42:21] or something along these lines or
|
|
[42:23] use a pull from partner, or, actually, you cannot use that. Or use an API to do the pull or something like that. That's the one use case where ST is extremely weak, unfortunately.
|
|
[42:37] Is this an external
|
|
[42:40] partner, or is it someone internally?
|
|
[42:43] It's internally. It's yeah. We're pulling from local folder monitors,
|
|
[42:49] and then we're sending it to our Azure Blob using the connector plug in. So another option here might be to introduce a CFT internally, our other server.
|
|
[43:00] So that instead of ST pulling directly from the directory,
|
|
[43:04] you get CFT to feed them over the PeSIT protocol, or SFTP if you don't like PeSIT, although I don't know why you wouldn't.
|
|
[43:12] Because
|
|
[43:13] one of the things is that you can never overwhelm ST with inbound connections.
|
|
[43:18] It usually gets overwhelmed when we greedily go out against a folder to grab a lot of files. And with folder monitors, you don't have that many controls as with other protocols,
|
|
[43:28] because on SSH or the rest of the protocols, you specify how many connections to open, which will reduce the number of files we can bring in. But still,
|
|
[43:37] if you go against 10,000 files, we'll still need to put 10,000 records into the database, so you have the same problem.
|
|
[43:44] So
|
|
[43:45] you
|
|
[43:46] will
|
|
[43:46] the only way to resolve your use case will be to actually redesign it, either to become inbound into ST,
|
|
[43:55] Or how big are those files?
|
|
[43:59] They're, like, half a megabyte each at most. They're not too much. So very small ones. And yeah, that's part of the problem. ST is better with bigger files because of all of the prep and post work we need to do. The smaller the file,
|
|
[44:13] the slower
|
|
[44:15] processing for it is because we have so much overhead
|
|
[44:19] around the transfer itself.
|
|
[44:22] So I honestly would look for another way to design the flow, usually by introducing someone that can push the files into ST
|
|
[44:31] or see if they can split them to come at different times,
|
|
[44:36] if possible,
|
|
[44:37] or just find a way to split them into manageable chunks.
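The chunking idea just described can be sketched as follows. This is a hypothetical helper, not an ST feature: it groups a directory listing by a filename pattern so each group can be assigned to its own scheduled folder monitor, with a fixed-size fallback when no natural pattern exists.

```python
from collections import defaultdict

def split_by_pattern(filenames, key=lambda name: name[0].lower()):
    """Bucket filenames by a pattern (here: first character) so each
    bucket can be pulled by a separate scheduled monitor."""
    groups = defaultdict(list)
    for name in filenames:
        groups[key(name)].append(name)
    return dict(groups)

def chunk(filenames, size):
    """Fallback when no naming pattern exists: fixed-size chunks."""
    return [filenames[i:i + size] for i in range(0, len(filenames), size)]
```

Each resulting group would then get its own scheduled pull at a different time of day, so the 10,000 files never hit the event queue all at once.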
|
|
[44:42] And a few thousand files won't be a problem, especially at this size. Right? It's just that this is too big, because in addition to everything else, when a file arrives, for every file we put it in the event queue so it can be processed.
|
|
[44:56] Right. Right. And that's what is starting to kill you. Forget about all the connections you're doing out, and
|
|
[45:02] it's not one of the reusable connections. So for each of those small files, we open a connection to your
|
|
[45:08] final destination, put the file, then close the connection, which also is a lot of back and forth. Right?
|
|
[45:15] Okay. Yeah. I think I can work with that. Just trying to redesign the flow either on setting up specific file masks or controlling
|
|
[45:24] how fast
|
|
[45:25] or what files come in at certain parts of the day. Yes. Yeah. And worst case scenario, especially because it's an internal partner,
|
|
[45:33] you can also see if you can have a client of some type or CFT is one option, our CFT. Right? Because it's it's one of our servers. We also have a secure client,
|
|
[45:44] which is the client software that can run somewhere
|
|
[45:47] and
|
|
[45:48] or wherever the data is
|
|
[45:51] and
|
|
[45:52] send to ST. Because all you need is someone to actually open a connection into ST to send them in. Because this will artificially slow them down enough for us to be able to handle them.
|
|
[46:03] Folder monitor is great, but it's also greedy.
|
|
[46:06] That's part of the challenge. And to this day, the only way to bring down
|
|
[46:13] I'm not going to say the only way. Okay.
|
|
[46:15] The fastest way to bring down the modern ST is with a pull. I can bring down any environment with a big enough pull. That's the reality of it.
|
|
[46:26] Just because
|
|
[46:27] and they are looking into options to,
|
|
[46:32] what's the word, to protect the server a little bit better, but there is also this fine line between being efficient versus
|
|
[46:39] protecting itself. Right? So we're getting better,
|
|
[46:43] but still
|
|
[46:44] 10,000,
|
|
[46:45] 100,000
|
|
[46:46] files. I once had a customer that actually had around a million files on a single folder monitor. Guess what happened? The server crashed.
|
|
[46:54] It's just
|
|
[46:57] yeah.
|
|
[46:58] Sorry, Jake. I was cutting you off.
|
|
[47:01] No. You're
|
|
[47:03] fine. I think I got everything I need from that question. So thanks for your time.
|
|
[47:07] And
|
|
[47:08] Nai actually posted in the chat window. She has an idea out for outbound restriction as a global setting, as opposed to always being on each transfer site individually and so on. So you might want to look at this idea. That's for everyone. Go vote if you think that this will be useful for you as well. Even if yours is a little different, add a comment with your business case. One of the important things when you are opening ideas, or voting for ideas, or commenting on an idea, is that the more business cases we have, the better. As I keep telling people, don't tell us that you want this value to be moved from one place to another. Tell us what is the business case you need solved.
|
|
[47:50] Right?
|
|
[47:51] Because,
|
|
[47:52] otherwise, we'll we'll move the property. It just won't do what you thought it will be doing.
|
|
[47:57] We had quite a few implementations like that. So
|
|
[48:00] thanks, Nay, for putting that in.
|
|
[48:03] Arpit, you are next, and then I have in the chat questions from Mark and from Brian.
|
|
[48:11] Oh, hold on a second. Brian's is actually a follow-up. So, Arpit, let me just get the follow-up done, and then it's you.
|
|
[48:17] Brian, with the scenario you're talking about with bringing down a server with a big enough pull, is that due to software limitations or hardware limitations? It's software limitations.
|
|
[48:27] It's essentially because we are bringing in files faster than we can process them.
|
|
[48:33] So all of our engines are getting bogged down
|
|
[48:37] into trying to process them. And the way it works in ST is that when the file arrives, we put it for processing. We put a record for processing for it in the database.
|
|
[48:47] So during the pull, for each file that arrives,
|
|
[48:50] we not only put it for processing. So the
|
|
[48:54] way pull works is that we go for the list first. We get the list of how many files, 10,000, 100,000,
|
|
[49:02] whatever.
|
|
[49:02] And then for each individual file, we create a separate event in the database that says we want to pull this file, this file, this file, this file. So when we find the list of 10,000, first of all, we very fast need to create these 10,000 events in the database, but then the TM, and both of them usually, or three of them if you have a big enough cluster, start pulling the events out to process them and execute them.
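The mechanics just described, one event per file created almost instantly from the listing, then drained at a limited concurrency, can be illustrated with a toy queue model. This is a simplified, hypothetical simulation for intuition only, not ST's actual scheduler; names like `simulate_pull` are made up.

```python
from collections import deque

def simulate_pull(file_count, max_concurrent, ticks_per_file=1):
    """Toy model: the directory listing enqueues one event per file at once,
    but workers drain the queue at a limited concurrency, so the queue depth
    spikes to the full listing size immediately."""
    queue = deque(f"file-{i}" for i in range(file_count))  # one event per file
    peak_depth = len(queue)                                # spike at listing time
    in_flight, done, ticks = [], 0, 0
    while queue or in_flight:
        # Start transfers up to the connection limit...
        while queue and len(in_flight) < max_concurrent:
            in_flight.append([queue.popleft(), ticks_per_file])
        # ...and advance every in-flight transfer by one tick.
        for job in in_flight:
            job[1] -= 1
        done += sum(1 for j in in_flight if j[1] <= 0)
        in_flight = [j for j in in_flight if j[1] > 0]
        ticks += 1
    return peak_depth, done, ticks
```

Even with a low connection limit, the peak queue depth equals the listing size, which is why capping connections alone does not prevent the event-queue flood being described.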
|
|
[49:25] So
|
|
[49:26] for the most part, the modern ST will protect itself enough not to pull too many files and actually crash all the way down. But for a while, anything that this server will be doing
|
|
[49:40] will be handling those transfers.
|
|
[49:43] First, the pulls of them, then while the file is pulled, another event is thrown, which is how to process the file. Even if you don't have processing, if it's just a pull into ST, we still need to process the incoming end, which is the successful file arrival.
|
|
[49:56] But if you have a push after that, it's also going to the queue. So what happens is your server doubles down.
|
|
[50:03] And sometimes,
|
|
[50:04] especially if the processing is very heavy, think PGPs and things that require a lot of memory and so on, you can bring a TM down just because there is too many files doing heavy processing at the same time.
|
|
[50:16] And even though they are trying to,
|
|
[50:19] or and when the server is protecting itself,
|
|
[50:21] it knows, oh, I have too many jobs. I cannot pull others. But if you have 22,000
|
|
[50:29] files or 200 files at the same time into the threads, all of them going to PGP at the same time,
|
|
[50:35] that's where the TM can decide that it's actually safer to crash
|
|
[50:39] so it can get restarted
|
|
[50:41] than trying to deal with what's going on.
|
|
[50:45] With the monitoring server,
|
|
[50:47] you have
|
|
[50:49] so we can monitor, we can restart the TM, and sometimes it's self-healing. And with both clusters now, if you are already on the September or October release, the new modern cluster also works that way. The server will join automatically when it starts, so it's cleaner.
|
|
[51:04] But this is where your
|
|
[51:08] the weakness of ST can sometimes be: big pulls. And they're looking into options for how to reduce the numbers. And even when you specify no more than 100 connections,
|
|
[51:18] that just limits how many connections we will open. If we find 10,000 files, we'll still list them all into our event queue and try to pull them 100 at a time.
|
|
[51:29] That explains it,
|
|
[51:31] Brian?
|
|
[51:32] You okay with that?
|
|
[51:34] Yeah. Thank you so much. I actually have some follow-up questions to that. So in terms of,
|
|
[51:40] you know, having one big pull, would that also apply to
|
|
[51:45] essentially
|
|
[51:46] instead
|
|
[51:47] maybe
|
|
[51:48] 10,000
|
|
[51:49] pulls occurring at the same time for different connections?
|
|
[51:53] Or does that scenario only exist
|
|
[51:56] for
|
|
[51:58] one big pull from a single connection?
|
|
[52:02] It
|
|
[52:04] depends.
|
|
[52:05] Multi so
|
|
[52:07] usually,
|
|
[52:07] the bigger problem is when the files are coming from the same
|
|
[52:11] place
|
|
[52:13] just because they literally go immediately one after another,
|
|
[52:16] opening connections to the same place, so the network saturation goes the same way for the most part to them, and you have your partner also usually slowing down because you hit them with too many connections.
|
|
[52:26] However,
|
|
[52:27] if you have 10,000 pulls that are effectively pulling files,
|
|
[52:33] not just connecting, but effectively pulling files at the same time, you might have a similar problem. But quite honestly,
|
|
[52:40] if you have that many files coming at the same time, you usually will have four or five nodes at least on your cluster,
|
|
[52:47] which will reduce the troubles. And because they are going different directions,
|
|
[52:50] for the most part, we'll be better at splitting them between the nodes.
|
|
[52:55] So
|
|
[52:57] I had never been able to bring down the server.
|
|
[53:02] So let me say it like that. I've never been able to bring down ST with too many
|
|
[53:07] different pools going at the same time in different directions,
|
|
[53:11] but I can bring down pretty much any environment or any server anyway with too many files on a single pull. And it has to do with the sheer number of events that get created
|
|
[53:23] almost instantaneously
|
|
[53:24] when we find a huge number of files.
|
|
[53:28] I see.
|
|
[53:29] And thank you for the library. One in out one in out in out in out. So
|
|
[53:34] yeah. Sorry. Go ahead. No.
|
|
[53:36] You're fine. Thank you for elaborating. Really appreciate it. So would you also say instead of I was gonna have a follow-up question of, like, if there's a specific number that you are aware of.
|
|
[53:46] But I think based on the information you're presenting, it sounds like it would be determined based on
|
|
[53:53] the database connection speed and at the rate at which those can be adjusted. Would that be a fair statement?
|
|
[53:59] That's one of the limitations.
|
|
[54:01] It also depends on the size of the files, because if you find a huge amount of very big files, even though you have this temporary bogging down while we're creating the events,
|
|
[54:13] right,
|
|
[54:14] they will not come out as fast because the pulling will be slower
|
|
[54:18] because they are big files,
|
|
[54:20] so they will not go into push that fast. The big problem with a huge amount of smaller files is that while we're pulling 100 files,
|
|
[54:28] the previous 100 are already in processing, so we're dealing with them.
|
|
[54:32] And then at one point, you have at least a 100 coming in. You have a 100 in the middle of processing. You have another 100 already trying to get out, so we have the other the pushes going out. Right?
|
|
[54:43] But, also,
|
|
[54:44] at the same time, we're trying to cycle through the
|
|
[54:47] existing list, and there is something called internal retry.
|
|
[54:52] So when there is an event, we'll go and check: oh, we want to pull that file, but there is no open connection, because all the connections for this transfer site are taken, you know, up to 100 or whatever you specified. So we'll throw the event back, and this is going to go every two minutes. That's what bogs it down.
|
|
[55:09] So it's really
|
|
[55:11] it's usually not the database connection per se. It's really about the sheer number of files.
|
|
[55:17] And anything over a couple of thousand coming from the same site is usually going to cause issues.
|
|
[55:24] Maybe part of it is this internal retry, because we literally will grab the event because it's at the top of the queue. We'll realize we don't have an open
|
|
[55:35] connection, we cannot open a connection for it. Right? Because we already have all the available connections open for it. That's why what I showed earlier with the SSH connection pooling, the reusability when it's SSH, actually works beautifully
|
|
[55:48] because
|
|
[55:49] when those files keep coming in for work,
|
|
[55:52] the connections in the pool are now free because the file cleared.
|
|
[55:57] Makes sense?
|
|
[56:00] Yeah. Very cool. Thank you so much for that information.
|
|
[56:03] Definitely appreciate it. And and, honestly, if you ask me for exact number, I cannot give you one. It depends on the environment at its own also. It's not a science, and it really depends also on the partner.
|
|
[56:14] Some partners bog down when you open too many connections to them. Just ask Hans,
|
|
[56:20] for example. No, Hans. Sorry. George.
|
|
[56:22] My bad.
|
|
[56:25] So Very good points. Thank you. Yep. Okay.
|
|
[56:29] Arpit, back to you, and then I have a couple of questions in in chat. So the chat question is Mark. Mark, I'll get to you in a second.
|
|
[56:38] Okay. Yeah. Sure. Just to add to the the previous conversation that you're you're having with Brian.
|
|
[56:43] So we were on a two
|
|
[56:45] TM cluster,
|
|
[56:48] like, before June, and we recently moved to a four node cluster. And we do quite a lot of pull operations,
|
|
[56:55] SFTP
|
|
[56:56] operations. Mhmm. Pull operations, and we have seen a
|
|
[56:59] significant improvement
|
|
[57:00] in that regard, and we have never seen a problem since we moved to the four node cluster. The more transaction managers you have in the cluster, the more connections you can open, the more threads you can open.
|
|
[57:17] So it it's about the numbers.
|
|
[57:19] Right?
|
|
[57:19] So think about it: if you have two TMs, for example, those two TMs need to do the pulling, the processing, and the pushing at the same time. Right. Right.
|
|
[57:29] If you add two more TMs, now part of those jobs will go to them. Even though we'll try to keep them on the same server where the files arrived, when it's that busy,
|
|
[57:40] the other ones will chime in. And when you have too many files, you'll almost see a complete split. So adding another server for this situation will always help. But, again, sooner or later, you'll hit the limit again. Yep. Yep. That's true.
|
|
[57:55] It's just about
|
|
[57:57] how fast you want to grow. But the reason I didn't want to recommend to Jake to add another server is because if it's a single partner just doing that once a day, adding a whole new server just for that is
|
|
[58:09] kinda
|
|
[58:10] too much. Right? That's right. Especially because it will just sit there otherwise. And I don't know, Jake, what your environment is and
|
|
[58:20] if
|
|
[58:21] if you might be overloading otherwise,
|
|
[58:24] you know, adding another node might be helpful always,
|
|
[58:27] but
|
|
[58:28] it's not
|
|
[58:30] if this is the only use case that requires it, it just doesn't make sense because we don't have auto scaling at the moment, and they're looking into it.
|
|
[58:37] Maybe downstream when we get to the cloud edition with auto scaling, that might be the choice for those because
|
|
[58:44] if
|
|
[58:45] so that might be one possible solution in the future for those use cases. You know, if you're in a Cloud Edition, you can auto scale to 10 TMs automatically
|
|
[58:55] in this case.
|
|
[58:56] That will handle even a bigger pool.
|
|
[58:58] Right?
|
|
[58:59] So maybe.
|
|
[59:01] Who knows? So we're all it it's just
|
|
[59:04] waiting to see where we're going. Right.
|
|
[59:07] So yeah. So another question, Annie. So we are looking at a use case where
|
|
[59:12] a lot lot of our
|
|
[59:15] users that that use MFT internally in our in our organization want to know the status of their file transfers, more details, and we obviously can't grant them access to the file tracking and
|
|
[59:25] and we do not use central as of now.
|
|
[59:29] We're looking at incorporating that in the future. But the the biggest
|
|
[59:34] use case that all of them want want to want to have is
|
|
[59:38] is to know the the flow of their
|
|
[59:41] their MFT setups that that we do for them.
|
|
[59:45] So in in the past,
|
|
[59:47] in the previous organization where I worked, we used to have a system of record where we
|
|
[59:54] have all of the SecureTransport accounts, configured transfer sites, and subscriptions documented in a way that the application teams can go and access themselves, as well as the admins. And we used to use admin
|
|
[60:09] APIs to to populate that system of records using an API gateway in the middle.
|
|
[60:14] In my current organization, there is no API gateway
|
|
[60:18] as of now.
|
|
[60:19] So would you recommend
|
|
[60:22] targeting the admin APIs using a Power BI directly to fetch these
|
|
[60:28] records and populate into a database or just
|
|
[60:32] populate
|
|
[60:34] on need basis? Or do you recommend having an API gateway
|
|
[60:38] always, when the API is used by the users themselves?
|
|
[60:43] If it is going to be used by the users themselves
|
|
[60:47] Mhmm. Themselves,
|
|
[60:48] an API gateway is mandatory for me
|
|
[60:51] from a pure performance perspective, because you cannot control how many of them will hit at the same time.
|
|
[60:59] Right?
|
|
[60:59] If we're talking about something that has 100 users or 10 users or, you know, a low number of users, I'm not that worried.
|
|
[61:07] But even then, having a gateway in the middle protects ST. The ST APIs are very powerful as we know, but they're also
|
|
[61:16] just like with the transfers, we're not very good at protecting ourselves. We're going to try to respond as much as we can. And the problem with the admin API
|
|
[61:25] is that it can bring down the admin UI, which you don't use for operation, so it's not that important.
|
|
[61:32] However, it also goes against the database,
|
|
[61:35] the same database where transfers are. And, usually, you will have when you so what usually happens is that you have more users trying to figure out what happened to their files when there is more files going through the system.
|
|
[61:49] So you have the team loading the databases
|
|
[61:52] Mhmm. Or the database. Right?
|
|
[61:54] At the same time, the admins start loading it to figure out what's going on. So you have the perfect storm
|
|
[62:01] where you have the
|
|
[62:04] you have your peaks at both engines at the same time, and they go into a single database.
|
|
[62:10] Right?
|
|
[62:11] So
|
|
[62:12] I personally
|
|
[62:13] strongly recommend to anyone that does anything customer facing, or for any API usage that is not restricted to just a couple of clients here and there. When I say clients, it's technical clients, not customers. Right? Understood. An API
|
|
[62:29] gateway
|
|
[62:30] protects you
|
|
[62:31] and gives you a chance to actually throttle.
|
|
[62:34] It gives you a chance to also do additional permissions of what they can and cannot see without relying just on ST to do that if you don't want to.
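The throttling a gateway adds in front of the ST admin API is typically some form of rate limiting. A minimal token-bucket sketch, assuming nothing about any particular gateway product; class and parameter names are hypothetical:

```python
import time

class TokenBucket:
    """Sketch of gateway-style throttling: cap the request rate so user
    traffic cannot spike the shared database behind the API."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # sustained requests per second
        self.capacity = burst          # short-burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # forward the request to ST
        return False      # reject (e.g. HTTP 429) instead of loading the DB
```

The same bucket can also back per-client quotas, which is where the gateway's extra permission layer mentioned above would hook in.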
|
|
[62:43] You know?
|
|
[62:45] As you can do delegated admins, obviously, and so on and so forth. But I just
|
|
[62:51] And if you go that way, by the way, the other thing that you need to be careful about is that the admin is a Java engine. So with the TM, they run on the same box.
|
|
[63:02] What happens when the both of them spike?
|
|
[63:05] Yep. Who is getting the resources? So
|
|
[63:07] I
|
|
[63:08] it doesn't mean it doesn't work without the gateway.
|
|
[63:11] And
|
|
[63:12] as anyone that has been on these meetings knows, I'll keep saying it: it doesn't need to be an Axway gateway. Any gateway will do. It's an API. Right? Understood. If
|
|
[63:22] you don't have a gateway, you need to implement some kind of throttling or protection at the implementation level, whoever is calling us.
|
|
[63:32] But just allowing thousands of requests coming into the ST API at the same time is a really bad idea.
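If there is no gateway in the path, the throttling described here has to live in the calling code. A minimal sketch of that idea, assuming a simple token-bucket limiter (the class, names, and rates are illustrative, not part of any Axway API):

```python
import time

class TokenBucket:
    """Client-side rate limiter: a hedged sketch of the kind of
    throttling suggested above when no API gateway protects ST."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec        # tokens replenished per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may go through now, False if the
        caller should back off instead of hammering the API."""
        now = time.monotonic()
        # replenish tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Gate every admin API call through the bucket instead of letting
# thousands of requests hit ST at once.
bucket = TokenBucket(rate_per_sec=10, burst=20)
allowed = sum(1 for _ in range(100) if bucket.allow())
print(allowed)  # roughly the burst size (20) in a tight loop
```

A gateway does this (and more) for you; this only shows the fallback when the throttle must sit in the client.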
|
|
[63:40] Understood. Understood. Thank you for reassuring, Kevin. Because that's what I I was thinking, and I am trying to convince my management to have
|
|
[63:48] in in between.
|
|
[63:50] And and, of course, if if the quest end, and this is where I'll put it in, if
|
|
[63:56] all they want to see
|
|
[63:58] so if
|
|
[64:00] another option might be creating a system of record just how we had it before.
|
|
[64:05] Instead of going live against the SD database all the time through APIs,
|
|
[64:11] create
|
|
[64:11] another database
|
|
[64:14] where
|
|
[64:16] you reference from the end users
|
|
[64:18] that gets updated every six hours, every four hours, every hour if need be. Right? Mhmm. For this, I will still put an API gateway if I can, but it's not as required anymore because you have a single read. Right? You just go and read every hour,
|
|
[64:35] update the other database, and then during real operations,
|
|
[64:40] you don't stress the ST database. And that's one of the things I always keep telling people. The more things you can pull out of ST and put somewhere else that is not file transfer related,
|
|
[64:51] the better for us because those resources are shared.
|
|
[64:55] Any resource we used for something else is not used for moving your files.
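The system-of-record idea above can be sketched as a scheduled job: one read against the ST REST API per cycle, with end users served from a local copy so they never touch the live ST database. The fetch function, table layout, and JSON field names below are assumptions for illustration, not the real ST API shape:

```python
import json
import sqlite3
import urllib.request

def fetch_transfers(api_url):
    """One read per refresh cycle against a (hypothetical) status API."""
    with urllib.request.urlopen(api_url) as resp:
        return json.load(resp)

def refresh_snapshot(conn, rows):
    """Upsert fetched rows into the local system-of-record table that
    end users query instead of ST."""
    conn.execute("""CREATE TABLE IF NOT EXISTS transfers (
                        id TEXT PRIMARY KEY,
                        status TEXT,
                        updated TEXT)""")
    conn.executemany(
        "INSERT OR REPLACE INTO transfers VALUES (:id, :status, :updated)",
        rows)
    conn.commit()

# Illustrative refresh with canned data; a real job would call
# fetch_transfers() hourly and feed the result in.
conn = sqlite3.connect(":memory:")
refresh_snapshot(conn, [
    {"id": "t1", "status": "done", "updated": "2024-11-01T10:00:00Z"},
])
print(conn.execute(
    "SELECT status FROM transfers WHERE id = 't1'").fetchone()[0])  # done
```

The design point matches the advice: the snapshot absorbs all end-user reads, and ST only sees one API read per refresh interval.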
|
|
[65:01] Doesn't
|
|
[65:02] make much of a difference on small environments,
|
|
[65:04] makes a huge difference if you are on, you know, a million-files-per-day environment. Yep. Yep. That's true.
|
|
[65:11] Yeah. And I can share, if it's helpful, what we're doing. By the way, hi, Arpit.
|
|
[65:16] Hey,
|
|
[65:18] How are you?
|
|
[65:19] Good.
|
|
[65:20] Arpit knows a little bit about our environment as well.
|
|
[65:24] But
|
|
[65:26] I can at least maybe
|
|
[65:29] share some of the things we're doing
|
|
[65:31] or the direction we're heading in case if it's helpful to you. Yep. So
|
|
[65:38] we've broken down those use cases into kind of two different
|
|
[65:42] meaning, like, the whole track and trace
|
|
[65:45] use case of folks being able to kinda self-service their own data.
|
|
[65:49] We've broken down into two different use cases. One is an analytical use case, which is more like historic
|
|
[65:55] data, like a lot of historic data over time.
|
|
[65:59] And then the other one is a transactional use case, which is more like real time data.
|
|
[66:05] We
|
|
[66:07] have all of our track and trace
|
|
[66:09] stuff fed into Sentinel today,
|
|
[66:12] which in in in my mind has been very very helpful.
|
|
[66:17] But
|
|
[66:18] so I would certainly advocate for Sentinel if you don't already have it today. But
|
|
[66:22] so we we have all the track and trace data going over to Sentinel.
|
|
[66:25] And
|
|
[66:27] we
|
|
[66:28] for the analytical
|
|
[66:30] stuff,
|
|
[66:32] we have
|
|
[66:33] we have most of that snapshotted into Power BI today
|
|
[66:37] just because doing,
|
|
[66:39] like, a lot of data,
|
|
[66:41] like,
|
|
[66:42] trending,
|
|
[66:45] how many, you know, total transactions do certain partners have,
|
|
[66:49] break down, you know, maybe a business unit into individual accounts, like highest partner and stuff like that. A lot of that we found is far quicker in a tool like Power BI.
|
|
[67:00] The only restriction there is you're kinda limited to the data snapshots
|
|
[67:05] that Power BI has.
|
|
[67:07] So meaning, you refresh the data every, like, four hours or
|
|
[67:11] daily or or whatever you wanna do there. Right? Becomes very expensive
|
|
[67:16] to do it near real time.
|
|
[67:19] But it it it's a good, like, statistics dashboard
|
|
[67:22] for the partners that need that kind of data.
|
|
[67:26] And then for real time, which is more kind of like the ops use case of,
|
|
[67:31] hey. Did a file come in?
|
|
[67:34] Sentinel has been really helpful in that space. And I know they're actually doing a lot of UI uplifting in Sentinel as well.
|
|
[67:42] The
|
|
[67:43] UI components,
|
|
[67:44] they're adding in some, like, Grafana like UI components
|
|
[67:48] to their new dashboards. Yeah. So
|
|
[67:51] I I think there's
|
|
[67:52] you know, if you're looking at implementing something new now,
|
|
[67:57] Sentinel would definitely be something good to look into, you know, provided you have management support. I
|
|
[68:03] agree. For operational data,
|
|
[68:07] when you look for status of the files
|
|
[68:09] Mhmm. That's
|
|
[68:11] that's the correct answer here. I'm sorry, but it is. Get Sentinel, you know, into the picture.
|
|
[68:16] Okay. Wonderful. Yep.
|
|
[68:18] If you are looking to go against admin
|
|
[68:22] UI for
|
|
[68:23] configuration,
|
|
[68:25] because that's what I also heard from you, Arpit, then for that, go with another database. Or, you know, you can have a custom tracking object in Sentinel, and you can keep track of that over there, making it your
|
|
[68:40] making the dashboard or whatever out of it. Just saying.
|
|
[68:44] Yeah. And if you had to, you could also DB link. Like, in the Sentinel database, you could create a DB link back to the ST database.
|
|
[68:52] Not for write purposes, but if you needed to, like,
|
|
[68:56] kind of, like, paint over subscription type information
|
|
[69:00] over track and trace data,
|
|
[69:02] sometimes that's helpful too. So, yeah, that's going to be Don't, highlighting Jeff here. Don't do that.
|
|
[69:08] Don't do that?
|
|
[69:09] No.
|
|
[69:10] Don't Okay. You
|
|
[69:12] really shouldn't do any read into the SD database from outside like that. But if you want to do a copy of your ST database and link that way into it, then you can do whatever you want. Just don't do it on the live database is my point.
|
|
[69:27] Just even so even from, like, a read only perspective? Yes. So, like, you you create a link in Sentinel database.
|
|
[69:34] You're not creating any link in ST. Right? I know. I I know. The the official support line,
|
|
[69:40] and I support it completely, is that there shouldn't be anything outside of ST connecting to the ST database
|
|
[69:48] regardless if it is read only or not.
|
|
[69:51] Ah, okay. So the preference is, like, API ingress versus database? Yes.
|
|
[69:55] API ingress is the recommended
|
|
[69:59] and the only supported way.
|
|
[70:02] Going against the database,
|
|
[70:04] I know that you usually know what you're doing, so I'm not that worried when you say that, Jeff, that you're doing it, although I'll slap your hand if I do an audit about it.
|
|
[70:13] But
|
|
[70:14] the problem is that even with read only access, you still are consuming resources on the database, connections on the database, and so on, which ST needs
|
|
[70:23] or should use on spikes.
|
|
[70:26] Okay. It's just the official line in the ST world is that you don't touch our file system where the files of users are going, the home folders, and
|
|
[70:38] the
|
|
[70:40] database
|
|
[70:41] outside of the ST connections.
|
|
[70:43] Because that way, you can actually do proper resource management.
|
|
[70:47] You can do
|
|
[70:49] proper connection management. You don't need to account for those additional connections. Doesn't mean your DBA cannot open a connection to look at things. It's more about don't have 20 read connections from someone for something.
|
|
[71:01] Yeah. I gotcha. Okay.
|
|
[71:03] Well, in that case, if you can get a dashboard or Sentinel dashboard or something to invoke APIs, then I guess that's the approach.
|
|
[71:10] Yes. Yeah. Obviously,
|
|
[71:12] the longer-term approach. Right? But you also have another option, which is remember that Sentinel can have
|
|
[71:19] custom tracking objects. So you can create a separate tracking object
|
|
[71:23] and then use scripting to bring the data into the tracking object every few hours or whatever from the live server for the configuration part.
|
|
[71:32] So they could bring in that data.
|
|
[71:37] So you you could bring that in through an a really? Okay. That's interesting. I don't think we thought about doing that.
|
|
[71:44] But you can do the API. Although, if it's just configuration data, I would usually just do good old XML export,
|
|
[71:51] parse the XML and push, you know, run XML export,
|
|
[71:55] and then parse the XML and push this data into the tracking object on the Sentinel side. That's the cleaner way sometimes because
|
|
[72:02] XML export is actually faster, cleaner, and better than API calls if you want a lot of data.
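The export route described here can be sketched as: run the export at the OS level, parse it, and shape the records for a custom tracking object. The element and attribute names below are made up for illustration; the real SecureTransport export schema differs:

```python
import xml.etree.ElementTree as ET

# Stand-in for a file produced by an XML export run; the real schema
# is different, this only shows the parse-and-reshape step.
SAMPLE_EXPORT = """
<Accounts>
  <Account name="acme" business="ACME Corp">
    <Subscription folder="/inbound" application="route-a"/>
  </Account>
</Accounts>
"""

def export_to_records(xml_text):
    """Flatten an exported configuration into one record per
    subscription, ready to push into a custom tracking object."""
    root = ET.fromstring(xml_text)
    records = []
    for acct in root.iter("Account"):
        for sub in acct.iter("Subscription"):
            records.append({
                "account": acct.get("name"),
                "folder": sub.get("folder"),
                "application": sub.get("application"),
            })
    return records

records = export_to_records(SAMPLE_EXPORT)
print(records[0]["account"])  # acme
```

The push into Sentinel itself is product-specific and omitted; the point is that the heavy lifting (the bulk read) happens via the export, not via many API calls.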
|
|
[72:08] Right? That's true.
|
|
[72:10] But it's on the OS level. So it's
|
|
[72:13] there is a lot of options. The point here is, if you want the configuration
|
|
[72:20] to be visible from external partners and there is a lot of them,
|
|
[72:24] don't give them access directly into SD even through an API gateway. If you can go around that, if you find another place, it's better.
|
|
[72:32] Yep. That's true.
|
|
[72:34] That that that's that's it. That's the reality of it. Right? That's true. So the solution that we are looking at implement so far, it's it's still in in in in ideation phases,
|
|
[72:44] something sort of an external system of record which gets populated, which the users can get access to, and then, obviously, it fetches data
|
|
[72:53] from from from the ST via the APIs. But I think I'll I'll look at the other option that you and Jeff just discussed. So thank you very much, Okay.
|
|
[73:01] Mark,
|
|
[73:02] up to your question. Sorry. We got bogged down in questions.
|
|
[73:06] In upgrading secure transport, there is a step where the install file is executed. Looks like it's serial and not in parallel even though the services are stopped. Do you mean serial
|
|
[73:18] between the different nodes, or do you mean serial on the same node, Mark?
|
|
[73:23] Serial with different nodes.
|
|
[73:26] So the reason it's done that way is historical.
|
|
[73:30] Essentially,
|
|
[73:31] what you'd never want to happen... well, let's not call it historical. But here,
|
|
[73:37] there is no reason not to do it in parallel except
|
|
[73:42] that you need to ensure
|
|
[73:45] that you never have the two nodes
|
|
[73:49] starting at the same time (this is for the old cluster, by the way). The two nodes starting on different cluster
|
|
[73:55] versions
|
|
[73:57] So if you can make sure that
|
|
[74:01] you never have one of the nodes starting on update
|
|
[74:06] April while the other is already in December
|
|
[74:08] at the same time, you would have been fine.
|
|
[74:12] But now with the new cluster with Postgres, remember that we all go against a single database. So if you go in parallel now,
|
|
[74:21] the challenge is that
|
|
[74:24] because
|
|
[74:25] which database will be used during the connection and who will declare primacy.
|
|
[74:30] That's part of the challenge with the new cluster. So that's the only reason. It's really about databases on the new one. On the old one, it was more a precaution than anything. No one could have stopped you from running them in parallel.
|
|
[74:45] Okay.
|
|
[74:46] Got it. That makes sense. Thank you. Yeah. But in the new one, I strongly recommend not to try the parallel. However, if you have three nodes,
|
|
[74:54] as long as the first one is fully updated, you can run the other two in parallel.
|
|
[74:58] Don't, you know, don't tell support I told you so, but technically, there is no reason because the live database is already up.
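The ordering described here (first node strictly serial because it migrates the shared database, the remaining nodes in parallel afterwards) can be sketched as orchestration logic; `update_node` is a placeholder for whatever actually runs the installer on a host:

```python
from concurrent.futures import ThreadPoolExecutor

def update_node(node):
    """Placeholder for running the ST installer on one host."""
    return f"{node} updated"

def rolling_update(nodes):
    """Serialize the first node, then fan out the rest in parallel,
    mirroring the recommendation above for the new cluster model."""
    # First node: must fully complete (it updates the live database).
    results = [update_node(nodes[0])]
    # Remaining nodes: safe to run concurrently once the DB is migrated.
    with ThreadPoolExecutor() as pool:
        results += list(pool.map(update_node, nodes[1:]))
    return results

print(rolling_update(["st-node-1", "st-node-2", "st-node-3"]))
```

For independent edges, the whole list could go through the parallel phase; the serial first step only matters where nodes share state.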
|
|
[75:05] But in the new
|
|
[75:08] cluster model, you know, you can run them in
|
|
[75:13] as long as the first one is up, you're good. And then on the edges, on the other hand, because
|
|
[75:20] the same thing applies, but if they are independent edges, running them in parallel is not a problem at all.
|
|
[75:26] Part of the reason why we tell people to do it one by one is in case there is a failure,
|
|
[75:31] so in case the update fails, the other node is not touched yet, so you can actually recover from it and
|
|
[75:39] go live back with it while you are recovering this node. If you run it in parallel, you just screwed up both of your servers.
|
|
[75:48] So as I said, precaution.
|
|
[75:51] Okay. That makes sense. Thank you.
|
|
[75:54] Yep.
|
|
[75:56] And
|
|
[75:57] this was for standard cluster, by the way. Enterprise cluster is a little bit more
|
|
[76:03] so unless you're doing ZDU, where the rules are totally different
|
|
[76:07] with standard updates on enterprise
|
|
[76:10] cluster, you want the first update on the first node to always complete before you start doing anything on the rest of the nodes, because the first one is the one updating the database. That's what's happening now in the
|
|
[76:25] new cluster as well, the new modern standard cluster. Right?
|
|
[76:30] But that's part of the reason. And ZDU is a totally different part of the house.
|
|
[76:38] So okay.
|
|
[76:41] Mark, which update are you on these days?
|
|
[76:46] We are on the June
|
|
[76:48] of this year. Okay.
|
|
[76:49] Okay.
|
|
[76:50] So keep in mind that your next update, to October or later, will change your databases
|
|
[76:56] and the model and everything. So you might want to be very careful on prerequisites.
|
|
[77:01] Just heads up because you're standard cluster.
|
|
[77:05] I'm sorry. I didn't catch that last part. To be careful about what?
|
|
[77:08] Prerequisites
|
|
[77:09] on the update because we're changing the model. So it's not just replacing the database. We're changing how it works. Right?
|
|
[77:17] So both the nodes on the standard cluster will talk to the primary database on the primary server
|
|
[77:23] in the new model. So when you update next time, this will come in. So what I'm saying is on your next update, read the release notes and make sure you have all of the pieces in place, including additional users if needed and so on. And you will need to open a new port between the two servers.
|
|
[77:41] Okay. Got it.
|
|
[77:42] Just a heads up. That's just for anyone on standard cluster,
|
|
[77:46] and it also applies to anyone with edges because this happens on the edges. If you have edges which are
|
|
[77:54] synchronizing
|
|
[77:55] with each other,
|
|
[77:58] If you update to
|
|
[78:00] September
|
|
[78:01] or October or later,
|
|
[78:03] the database changes to Postgres, and the synchronization model changes completely.
|
|
[78:07] So
|
|
[78:08] you might need to open ports.
|
|
[78:11] Okay.
|
|
[78:12] Any other questions? Mark,
|
|
[78:15] any follow ups?
|
|
[78:17] Oh, no. I think that was that was sufficiently answered. Thank you so much.
|
|
[78:21] Of course. Okay.
|
|
[78:23] So we're almost at time. So but we still have time for another question
|
|
[78:29] or two.
|
|
[78:30] So any anyone else? If not, back to Nicole.
|
|
[78:34] So
|
|
[78:36] last call for questions.
|
|
[78:38] No?
|
|
[78:40] Okay. Nicole, back to you. So there will be one more of those this year in December,
|
|
[78:51] which still haven't been scheduled, and it will be scheduled in the next couple of days.
|
|
[78:56] So it's on the
|
|
[78:58] December 12. It's on the twelfth.
|
|
[79:02] So I'll be happy to see all of you again.
|
|
[79:05] Again, if you have questions beforehand, please send them over because it makes it a little faster and easier at the beginning, and I usually have something prepared if needed.
|
|
[79:15] And if you happen to live in the Dallas or Sacramento area, we're having user groups in person in Dallas next week, Sacramento the week after.
|
|
[79:24] So come talk to us.
|
|
[79:27] And back to Nicole.
|
|
[79:30] Yep. So a few slides to end.
|
|
[79:34] Thank you,
|
|
[79:35] Annie, as usual.
|
|
[79:38] So just a little reminder
|
|
[79:40] about our
|
|
[79:42] community portal
|
|
[79:45] where you can post your ideas to
|
|
[79:48] enhance the
|
|
[79:50] product,
|
|
[79:51] SecureTransport.
|
|
[79:52] You have all the info about the future user groups, and you can
|
|
[79:58] register from there. If you missed the email,
|
|
[80:01] the invitation email,
|
|
[80:03] you can have a look at the road maps, the
|
|
[80:07] next release content,
|
|
[80:09] and future
|
|
[80:12] release also.
|
|
[80:14] And you can also post your questions or your comments, and get answers from our experts, but also from your peers.
|
|
[80:26] You also have
|
|
[80:29] videos posted on the
|
|
[80:32] Axway
|
|
[80:34] YouTube channel,
|
|
[80:36] so have a look.
|
|
[80:38] And
|
|
[80:40] there is the G2 platform
|
|
[80:44] where you can post your
|
|
[80:47] feedback about the product,
|
|
[80:49] and it helps other customers that are are looking for a solution
|
|
[80:55] like
|
|
[80:56] Secure Transport, and they
|
|
[80:59] are trying to find who
|
|
[81:01] has
|
|
[81:03] a good product. So if you are happy
|
|
[81:06] with SecureTransport,
|
|
[81:08] help your
|
|
[81:10] future peers
|
|
[81:11] and leave a
|
|
[81:13] little
|
|
[81:14] review.
|
|
[81:15] And to thank you for the time you're spending on that,
|
|
[81:19] we
|
|
[81:20] sent you a little gift.
|
|
[81:22] So don't hesitate.
|
|
[81:26] And with that, yeah, you will be receiving soon
|
|
[81:30] an invitation
|
|
[81:31] for the virtual the next virtual, which will be our last
|
|
[81:36] ask Annie virtual session
|
|
[81:39] scheduled for October 12. So coming soon.
|
|
[81:43] Thank you. December, not October.
|
|
[81:46] Believe it or not, it's already November, Nicole. But did I say October? Yeah. Alright.
|
|
[81:52] Getting late here. Thank you, Adi.
|
|
[81:56] December 12.
|
|
[81:58] December 12, last for the year, and we'll be back next year again. So you are not we're not stopping the program. It's just that the year is ending somehow. We don't know how it happened.
|
|
[82:10] Yeah. And you will be receiving as usual
|
|
[82:14] post event survey. Don't forget to answer. It shows
|
|
[82:18] how much you appreciate
|
|
[82:21] those events.
|
|
[82:23] Our management looks at the
|
|
[82:26] answer rate,
|
|
[82:28] the number of answers we receive.
|
|
[82:30] So there is the answer you give, but also the fact that you take a few minutes to
|
|
[82:38] answer shows how much you appreciate our program. So don't hesitate
|
|
[82:44] to answer.
|
|
[82:46] And with that, I thank you all and
|
|
[82:50] see you
|
|
[82:51] next month.
|
|
[82:53] Thanks
|
|
[82:54] everyone and thanks everyone for
|
|
[82:57] sharing their experience and what they are doing as well. That's the whole point of these meetings.
|
|
[83:02] Have a wonderful rest of the day and rest of the month, and happy holidays to the US side of the house, and talk to you next month.
|
|
[83:11] And come talk to come see me in Sacramento and Dallas, please, if you are close by.
|
|
[83:17] Bye.
|
|
[83:19] Thank you, Andy,
|
|
[83:21] and thank you all dear customers.
|
|
[83:24] Bye bye.
|
|
[83:26] Thank you.
|